Oct 08 13:18:16 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 08 13:18:16 crc restorecon[4767]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 13:18:16 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 
13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 13:18:17 crc 
restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 
13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 13:18:17 crc restorecon[4767]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 13:18:17 crc restorecon[4767]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 08 13:18:18 crc kubenswrapper[5065]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 08 13:18:18 crc kubenswrapper[5065]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 08 13:18:18 crc kubenswrapper[5065]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 08 13:18:18 crc kubenswrapper[5065]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
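Note on the restorecon pass above: every "not reset as customized by admin" entry is restorecon declining to touch a file. container_file_t is listed among the policy's customizable types, which is why restorecon run without -F reports these contexts as admin customizations and leaves them in place, including the per-pod MCS category pairs (s0:c7,c13 and the like) that the kubelet and CRI-O assign deliberately; only the "Relabeled ..." entries record actual changes. With this many entries, often several fused onto one physical line in this capture, a tally is easier to read than the raw stream. The sketch below is a minimal, hypothetical helper, not part of kubelet, CRI-O, or policycoreutils; it assumes journalctl text in exactly the format shown above on stdin:

    #!/usr/bin/env python3
    # Tally restorecon "not reset as customized by admin" entries by target SELinux context.
    # Hypothetical helper for this log excerpt; assumes the journalctl format shown above.
    import collections
    import re
    import sys

    # One physical line in this capture can carry several journal entries, hence findall.
    PAT = re.compile(r"restorecon\[\d+\]: (\S+) not reset as customized by admin to (\S+)")

    counts = collections.Counter()
    for line in sys.stdin:
        for _path, context in PAT.findall(line):
            counts[context] += 1

    for context, n in counts.most_common():
        print(f"{n:6d}  {context}")

Fed with something like journalctl -b -t restorecon, this would show the bulk of entries in this excerpt targeting system_u:object_r:container_file_t:s0:c7,c13 (the two catalog/registry-server pods), with smaller counts for the other category pairs on the control-plane and oauth pods.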
Oct 08 13:18:18 crc kubenswrapper[5065]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 08 13:18:18 crc kubenswrapper[5065]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.630985 5065 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634688 5065 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634716 5065 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634721 5065 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634725 5065 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634729 5065 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634734 5065 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634738 5065 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634744 5065 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634749 5065 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634755 5065 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634759 5065 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634763 5065 feature_gate.go:330] unrecognized feature gate: Example
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634767 5065 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634771 5065 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634774 5065 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634778 5065 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634781 5065 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634786 5065 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634790 5065 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634794 5065 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634797 5065 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634801 5065 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634804 5065 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634809 5065 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634814 5065 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634818 5065 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634823 5065 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634826 5065 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634830 5065 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634834 5065 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634838 5065 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634843 5065 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634847 5065 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634851 5065 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634854 5065 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634858 5065 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634861 5065 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634865 5065 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634868 5065 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634872 5065 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634877 5065 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634882 5065 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634887 5065 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634891 5065 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634894 5065 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634898 5065 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634902 5065 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634905 5065 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634909 5065 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634912 5065 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634916 5065 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634919 5065 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634922 5065 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634926 5065 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634929 5065 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634932 5065 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634937 5065 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634941 5065 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634944 5065 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634947 5065 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634951 5065 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634954 5065 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634960 5065 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634965 5065 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634970 5065 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634974 5065 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634980 5065 feature_gate.go:330] unrecognized feature gate:
AWSEFSDriverVolumeMetrics Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634984 5065 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634989 5065 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634994 5065 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.634998 5065 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636011 5065 flags.go:64] FLAG: --address="0.0.0.0" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636030 5065 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636041 5065 flags.go:64] FLAG: --anonymous-auth="true" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636048 5065 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636056 5065 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636062 5065 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636069 5065 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636077 5065 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636082 5065 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636088 5065 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636094 5065 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636099 5065 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636105 5065 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636111 5065 flags.go:64] FLAG: --cgroup-root="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636116 5065 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636121 5065 flags.go:64] FLAG: --client-ca-file="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636126 5065 flags.go:64] FLAG: --cloud-config="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636134 5065 flags.go:64] FLAG: --cloud-provider="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636140 5065 flags.go:64] FLAG: --cluster-dns="[]" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636148 5065 flags.go:64] FLAG: --cluster-domain="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636154 5065 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636159 5065 flags.go:64] FLAG: --config-dir="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636164 5065 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636171 5065 flags.go:64] FLAG: --container-log-max-files="5" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636178 
5065 flags.go:64] FLAG: --container-log-max-size="10Mi" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636184 5065 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636190 5065 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636195 5065 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636201 5065 flags.go:64] FLAG: --contention-profiling="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636208 5065 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636213 5065 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636219 5065 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636225 5065 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636231 5065 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636236 5065 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636242 5065 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636247 5065 flags.go:64] FLAG: --enable-load-reader="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636252 5065 flags.go:64] FLAG: --enable-server="true" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636257 5065 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636264 5065 flags.go:64] FLAG: --event-burst="100" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636269 5065 flags.go:64] FLAG: --event-qps="50" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636274 5065 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636279 5065 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636284 5065 flags.go:64] FLAG: --eviction-hard="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636291 5065 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636296 5065 flags.go:64] FLAG: --eviction-minimum-reclaim="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636301 5065 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636306 5065 flags.go:64] FLAG: --eviction-soft="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636311 5065 flags.go:64] FLAG: --eviction-soft-grace-period="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636317 5065 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636322 5065 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636327 5065 flags.go:64] FLAG: --experimental-mounter-path="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636332 5065 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636337 5065 flags.go:64] FLAG: --fail-swap-on="true" Oct 08 13:18:18 crc 
kubenswrapper[5065]: I1008 13:18:18.636342 5065 flags.go:64] FLAG: --feature-gates="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636349 5065 flags.go:64] FLAG: --file-check-frequency="20s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636354 5065 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636359 5065 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636365 5065 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636370 5065 flags.go:64] FLAG: --healthz-port="10248" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636376 5065 flags.go:64] FLAG: --help="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636382 5065 flags.go:64] FLAG: --hostname-override="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636388 5065 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636393 5065 flags.go:64] FLAG: --http-check-frequency="20s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636398 5065 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636404 5065 flags.go:64] FLAG: --image-credential-provider-config="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636426 5065 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636434 5065 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636439 5065 flags.go:64] FLAG: --image-service-endpoint="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636444 5065 flags.go:64] FLAG: --kernel-memcg-notification="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636449 5065 flags.go:64] FLAG: --kube-api-burst="100" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636454 5065 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636460 5065 flags.go:64] FLAG: --kube-api-qps="50" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636465 5065 flags.go:64] FLAG: --kube-reserved="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636470 5065 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636476 5065 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636481 5065 flags.go:64] FLAG: --kubelet-cgroups="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636486 5065 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636491 5065 flags.go:64] FLAG: --lock-file="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636497 5065 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636502 5065 flags.go:64] FLAG: --log-flush-frequency="5s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636507 5065 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636516 5065 flags.go:64] FLAG: --log-json-split-stream="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636521 5065 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636527 
5065 flags.go:64] FLAG: --log-text-split-stream="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636533 5065 flags.go:64] FLAG: --logging-format="text" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636538 5065 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636544 5065 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636549 5065 flags.go:64] FLAG: --manifest-url="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636555 5065 flags.go:64] FLAG: --manifest-url-header="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636561 5065 flags.go:64] FLAG: --max-housekeeping-interval="15s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636567 5065 flags.go:64] FLAG: --max-open-files="1000000" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636575 5065 flags.go:64] FLAG: --max-pods="110" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636580 5065 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636585 5065 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636590 5065 flags.go:64] FLAG: --memory-manager-policy="None" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636597 5065 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636602 5065 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636607 5065 flags.go:64] FLAG: --node-ip="192.168.126.11" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636612 5065 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636626 5065 flags.go:64] FLAG: --node-status-max-images="50" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636631 5065 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636637 5065 flags.go:64] FLAG: --oom-score-adj="-999" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636643 5065 flags.go:64] FLAG: --pod-cidr="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636648 5065 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636656 5065 flags.go:64] FLAG: --pod-manifest-path="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636661 5065 flags.go:64] FLAG: --pod-max-pids="-1" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636667 5065 flags.go:64] FLAG: --pods-per-core="0" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636672 5065 flags.go:64] FLAG: --port="10250" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636677 5065 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636682 5065 flags.go:64] FLAG: --provider-id="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636688 5065 flags.go:64] FLAG: --qos-reserved="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636693 5065 flags.go:64] FLAG: --read-only-port="10255" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636699 5065 
flags.go:64] FLAG: --register-node="true" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636704 5065 flags.go:64] FLAG: --register-schedulable="true" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636709 5065 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636719 5065 flags.go:64] FLAG: --registry-burst="10" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636724 5065 flags.go:64] FLAG: --registry-qps="5" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636729 5065 flags.go:64] FLAG: --reserved-cpus="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636734 5065 flags.go:64] FLAG: --reserved-memory="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636741 5065 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636746 5065 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636751 5065 flags.go:64] FLAG: --rotate-certificates="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636756 5065 flags.go:64] FLAG: --rotate-server-certificates="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636761 5065 flags.go:64] FLAG: --runonce="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636767 5065 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636772 5065 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636778 5065 flags.go:64] FLAG: --seccomp-default="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636784 5065 flags.go:64] FLAG: --serialize-image-pulls="true" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636789 5065 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636795 5065 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636801 5065 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636806 5065 flags.go:64] FLAG: --storage-driver-password="root" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636812 5065 flags.go:64] FLAG: --storage-driver-secure="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636817 5065 flags.go:64] FLAG: --storage-driver-table="stats" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636822 5065 flags.go:64] FLAG: --storage-driver-user="root" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636827 5065 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636833 5065 flags.go:64] FLAG: --sync-frequency="1m0s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636838 5065 flags.go:64] FLAG: --system-cgroups="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636844 5065 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636854 5065 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636859 5065 flags.go:64] FLAG: --tls-cert-file="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636865 5065 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636873 
5065 flags.go:64] FLAG: --tls-min-version="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636878 5065 flags.go:64] FLAG: --tls-private-key-file="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636884 5065 flags.go:64] FLAG: --topology-manager-policy="none" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636889 5065 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636894 5065 flags.go:64] FLAG: --topology-manager-scope="container" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636899 5065 flags.go:64] FLAG: --v="2" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636907 5065 flags.go:64] FLAG: --version="false" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636914 5065 flags.go:64] FLAG: --vmodule="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636920 5065 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.636925 5065 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637073 5065 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637079 5065 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637083 5065 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637087 5065 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637092 5065 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637097 5065 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637100 5065 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637106 5065 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637110 5065 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637114 5065 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637118 5065 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637122 5065 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637126 5065 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637129 5065 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637133 5065 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637136 5065 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637140 5065 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637143 5065 feature_gate.go:330] unrecognized feature gate: 
MachineAPIMigration Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637147 5065 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637150 5065 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637153 5065 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637157 5065 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637161 5065 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637164 5065 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637168 5065 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637171 5065 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637176 5065 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637181 5065 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637185 5065 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637189 5065 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637192 5065 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637198 5065 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637202 5065 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637205 5065 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637209 5065 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637212 5065 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637216 5065 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637220 5065 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637223 5065 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637227 5065 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637231 5065 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637234 5065 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637238 5065 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637241 5065 feature_gate.go:330] unrecognized feature gate: Example Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637245 5065 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637249 5065 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637253 5065 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637257 5065 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637260 5065 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637264 5065 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637267 5065 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637271 5065 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637274 5065 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637278 5065 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637281 5065 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637285 5065 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637288 5065 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637292 5065 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637297 5065 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637301 5065 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637305 5065 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637309 5065 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637313 5065 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637317 5065 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637321 5065 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637325 5065 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637328 5065 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637332 5065 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637337 5065 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637341 5065 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.637344 5065 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.637351 5065 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.647997 5065 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.648044 5065 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648125 5065 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648134 5065 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648141 5065 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648147 5065 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648153 5065 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648160 5065 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648165 5065 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648170 5065 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648175 5065 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648179 5065 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648184 5065 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648189 5065 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648194 5065 feature_gate.go:330] unrecognized feature gate: Example Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648200 5065 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648205 5065 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648211 5065 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648216 5065 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648222 5065 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648226 5065 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648231 5065 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648236 5065 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648240 5065 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648245 5065 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648250 5065 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648254 5065 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648258 5065 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648263 5065 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648268 5065 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648273 5065 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648278 5065 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648282 5065 feature_gate.go:330] 
unrecognized feature gate: BareMetalLoadBalancer Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648287 5065 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648291 5065 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648296 5065 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648301 5065 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648305 5065 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648310 5065 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648314 5065 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648319 5065 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648323 5065 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648327 5065 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648332 5065 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648337 5065 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648342 5065 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648346 5065 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648351 5065 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648356 5065 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648360 5065 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648366 5065 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648371 5065 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648375 5065 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648380 5065 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648384 5065 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648388 5065 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648393 5065 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648397 5065 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648401 5065 
feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648407 5065 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648430 5065 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648436 5065 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648441 5065 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648446 5065 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648452 5065 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648458 5065 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648656 5065 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648660 5065 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648664 5065 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648669 5065 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648675 5065 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648679 5065 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648684 5065 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.648692 5065 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648854 5065 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648863 5065 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648868 5065 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648873 5065 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648878 5065 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648882 5065 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648887 5065 feature_gate.go:330] 
unrecognized feature gate: VSphereMultiNetworks Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648891 5065 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648896 5065 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648900 5065 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648905 5065 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648910 5065 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648915 5065 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648919 5065 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648924 5065 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648928 5065 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648932 5065 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648937 5065 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648943 5065 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648949 5065 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648954 5065 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648959 5065 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648964 5065 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648969 5065 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648974 5065 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648979 5065 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648984 5065 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648988 5065 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648993 5065 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.648998 5065 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649002 5065 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649007 5065 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 
13:18:18.649012 5065 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649017 5065 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649022 5065 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649028 5065 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649034 5065 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649040 5065 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649044 5065 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649049 5065 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649053 5065 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649057 5065 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649064 5065 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649069 5065 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649074 5065 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649080 5065 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649086 5065 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649090 5065 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649095 5065 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649101 5065 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649106 5065 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649110 5065 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649115 5065 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649120 5065 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649124 5065 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649129 5065 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649133 5065 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649140 5065 feature_gate.go:353] Setting GA feature 
gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649145 5065 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649149 5065 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649154 5065 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649160 5065 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649166 5065 feature_gate.go:330] unrecognized feature gate: Example Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649170 5065 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649175 5065 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649180 5065 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649184 5065 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649189 5065 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649194 5065 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649199 5065 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.649203 5065 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.649212 5065 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.650352 5065 server.go:940] "Client rotation is on, will bootstrap in background" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.656019 5065 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.656147 5065 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
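[Editor's note] The feature_gate.go entries above follow one pattern: the gate set is parsed once from the command line and again while loading configuration, so the same warnings and the same effective "feature gates: {map[...]}" dump appear three times with identical contents. The unrecognized names (PinnedImages, GatewayAPI, NewOLM, and so on) appear to be OpenShift platform gates that the vendored Kubernetes gate registry does not know, which is why they are warned about rather than applied. A small sketch, assuming nothing beyond Go's fmt rendering of a map, for turning one of those dumps into a Python dict when comparing the occurrences; the sample line is abridged from the dump above:

import re

# Abridged from the "feature gates: {map[...]}" entries above.
line = ('feature gates: {map[CloudDualStackNodeIPs:true '
        'DisableKubeletCloudCredentialProviders:true KMSv1:true '
        'NodeSwap:false ValidatingAdmissionPolicy:true '
        'VolumeAttributesClass:false]}')

# Go's fmt prints maps as map[key:value ...]; gate names contain no
# spaces or colons, so splitting on whitespace and ':' is enough.
body = re.search(r'map\[([^\]]*)\]', line).group(1)
gates = {name: value == 'true'
         for name, value in (pair.split(':') for pair in body.split())}

print(sorted(name for name, enabled in gates.items() if enabled))
# -> ['CloudDualStackNodeIPs', 'DisableKubeletCloudCredentialProviders',
#     'KMSv1', 'ValidatingAdmissionPolicy']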
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.657895 5065 server.go:997] "Starting client certificate rotation"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.657933 5065 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.658140 5065 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-29 03:46:54.170559558 +0000 UTC
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.658269 5065 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1238h28m35.512293799s for next certificate rotation
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.693859 5065 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.695603 5065 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.709490 5065 log.go:25] "Validated CRI v1 runtime API"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.750804 5065 log.go:25] "Validated CRI v1 image API"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.752824 5065 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.759171 5065 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-08-13-12-29-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.759222 5065 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.789619 5065 manager.go:217] Machine: {Timestamp:2025-10-08 13:18:18.78662951 +0000 UTC m=+0.564011287 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:1bc7a529-1398-49b6-b75f-648e257076b7 BootID:137ca619-3348-4004-b5e9-6fba48af3fd0 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:22:2a:bd Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:22:2a:bd Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2a:7a:68 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:72:e4:fc Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c7:1b:76 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f9:75:90 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:36:8b:7b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:62:7b:f7:66:65:e0 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d2:cd:96:24:d0:c3 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.789988 5065 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.790189 5065 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.790776 5065 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.791081 5065 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.791133 5065 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.791528 5065 topology_manager.go:138] "Creating topology manager with none policy"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.791546 5065 container_manager_linux.go:303] "Creating device plugin manager"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.792242 5065 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.792294 5065 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.793124 5065 state_mem.go:36] "Initialized new in-memory state store"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.793844 5065 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.798320 5065 kubelet.go:418] "Attempting to sync node with API server"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.798357 5065 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.798406 5065 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.798451 5065 kubelet.go:324] "Adding apiserver pod source"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.798469 5065 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.802566 5065 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.803712 5065 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.804540 5065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused
Oct 08 13:18:18 crc kubenswrapper[5065]: E1008 13:18:18.804612 5065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError"
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.804560 5065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused
Oct 08 13:18:18 crc kubenswrapper[5065]: E1008 13:18:18.804655 5065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.805121 5065 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.806576 5065 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.806600 5065 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.806609 5065 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.806618 5065 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.806632 5065 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.806640 5065 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.806649 5065 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.806662 5065 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.806672 5065 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.806683 5065 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.806695 5065 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.806704 5065 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.807719 5065 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.808112 5065 server.go:1280] "Started kubelet"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.809335 5065 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.809460 5065 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.809461 5065 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 08 13:18:18 crc systemd[1]: Started Kubernetes Kubelet.
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.810208 5065 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.810242 5065 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.810467 5065 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.810630 5065 volume_manager.go:287] "The desired_state_of_world populator starts"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.810653 5065 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.810696 5065 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 08 13:18:18 crc kubenswrapper[5065]: E1008 13:18:18.810662 5065 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.810607 5065 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 17:29:17.631073738 +0000 UTC
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.810747 5065 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1876h10m58.820336335s for next certificate rotation
Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.811346 5065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused
Oct 08 13:18:18 crc kubenswrapper[5065]: E1008 13:18:18.811444 5065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.811930 5065 factory.go:55] Registering systemd factory
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.811974 5065 factory.go:221] Registration of the systemd container factory successfully
Oct 08 13:18:18 crc kubenswrapper[5065]: E1008 13:18:18.811984 5065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="200ms"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.812832 5065 server.go:460] "Adding debug handlers to kubelet server"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.818561 5065 factory.go:153] Registering CRI-O factory
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.818621 5065 factory.go:221] Registration of the crio container factory successfully
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.818816 5065 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.818852 5065 factory.go:103] Registering Raw factory
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.818877 5065 manager.go:1196] Started watching for new ooms in manager
Oct 08 13:18:18 crc kubenswrapper[5065]: E1008 13:18:18.818807 5065 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.45:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186c868dfe2ff37e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-08 13:18:18.808087422 +0000 UTC m=+0.585469179,LastTimestamp:2025-10-08 13:18:18.808087422 +0000 UTC m=+0.585469179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.823726 5065 manager.go:319] Starting recovery of all containers
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.828217 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.828289 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.828303 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.828314 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.828324 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.828334 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.828364 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.828401 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.828431 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.830919 5065 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.831071 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.831822 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.831856 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.831884 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.831927 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.834776 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.834833 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.834857 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.834879 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.834898 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.834912 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.834933 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.834952 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.834966 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.834984 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.834998 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835020 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835039 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835057 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835138 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835175 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835186 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835198 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835210 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835220 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835230 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835242 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835255 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835267 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835278 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835289 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835306 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835322 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835903 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835930 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835959 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.835983 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836002 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836014 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836033 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836046 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836065 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836079 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836506 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836536 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836552 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836565 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836584 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836598 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836612 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836623 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836633 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836645 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836657 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836668 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836680 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836692 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836703 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836715 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836729 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836750 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836770 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836782 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836794 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836806 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836818 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836831 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836849 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836869 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836886 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836898 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836923 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836943 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836956 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836969 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836982 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.836997 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837009 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837029 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837041 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837053 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837063 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837075 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837091 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837106 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837118 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837129 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837141 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837153 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837163 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837174 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837186 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837197 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837210 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837221 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837237 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837249 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837261 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837275 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837288 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837304 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837316 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837329 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837341 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837356 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837370 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837381 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837392 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837403 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837465 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837482 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837497 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837509 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837522 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837536 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837549 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837562 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837574 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837586 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837597 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837609 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837621 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837634 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837645 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837657 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837668 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837680 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837692 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837704 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837716 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93"
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837729 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837740 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837755 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837767 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837779 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837839 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837852 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837866 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837881 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837893 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837906 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837919 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837932 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837944 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837955 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837967 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837979 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.837994 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838006 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838017 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838029 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838041 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838053 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838067 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838087 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838106 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838118 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838137 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838181 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838194 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838209 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838224 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838238 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838247 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838258 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838266 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838274 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838290 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838299 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838308 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838316 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838325 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838334 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838348 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838358 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838368 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838377 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838386 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838396 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838405 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838534 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838544 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838554 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838563 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838573 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838590 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838604 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838619 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838633 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838642 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838656 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838666 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838674 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838684 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838692 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838700 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838710 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838719 5065 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838726 5065 reconstruct.go:97] "Volume reconstruction finished" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.838740 5065 reconciler.go:26] "Reconciler: start to sync state" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.840165 5065 manager.go:324] Recovery completed Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.855635 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.857385 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.857461 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.857475 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.858453 5065 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.858491 5065 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.858583 5065 state_mem.go:36] "Initialized new in-memory state store" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.865561 5065 policy_none.go:49] "None policy: Start" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.867951 5065 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.867987 5065 state_mem.go:35] "Initializing new in-memory state store" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.869523 5065 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.872152 5065 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.872225 5065 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.872269 5065 kubelet.go:2335] "Starting kubelet main sync loop" Oct 08 13:18:18 crc kubenswrapper[5065]: E1008 13:18:18.872470 5065 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 08 13:18:18 crc kubenswrapper[5065]: W1008 13:18:18.872973 5065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Oct 08 13:18:18 crc kubenswrapper[5065]: E1008 13:18:18.873037 5065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Oct 08 13:18:18 crc kubenswrapper[5065]: E1008 13:18:18.911035 5065 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.931594 5065 manager.go:334] "Starting Device Plugin manager" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.931654 5065 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.931671 5065 server.go:79] "Starting device plugin registration server" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.932170 5065 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.932201 5065 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.932456 5065 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.932540 5065 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.932552 5065 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 08 13:18:18 crc kubenswrapper[5065]: E1008 13:18:18.939173 5065 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.972795 5065 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.972893 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.973909 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.974054 5065 
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.974066 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.974268 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.974579 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.974635 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.975346 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.975376 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.975385 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.975551 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.975678 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.975721 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.976171 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.976196 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.976207 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.976226 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.976239 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.976247 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.976294 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.976310 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.976325 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.976352 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.976551 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.976579 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.977135 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.977157 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.977164 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.977176 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.977191 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.977201 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.977302 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.977410 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.977456 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.977949 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.977964 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.977971 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.978072 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
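Each "No sandbox for pod can be found. Need to start a new one" line above corresponds to one of the five file-source static pods from the SyncLoop ADD: the manifests exist on disk, but after the restart CRI-O holds no sandboxes for them, so each is created from scratch. A sketch of what the file source does, assuming the conventional /etc/kubernetes/manifests directory; this is a simplification of the kubelet's file source, which additionally mirrors these pods to the API server and suffixes the node name (hence etcd-crc on node crc):

// Sketch: read static pod manifests from disk, the source="file" pods above.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	dir := "/etc/kubernetes/manifests" // assumed conventional location
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		data, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			panic(err)
		}
		var pod corev1.Pod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			fmt.Printf("skip %s: %v\n", e.Name(), err)
			continue
		}
		// Reported as namespace/name, e.g. openshift-etcd/etcd.
		fmt.Printf("file source pod: %s/%s\n", pod.Namespace, pod.Name)
	}
}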
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.978087 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.978557 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.978581 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.978591 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.978630 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.978661 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:18 crc kubenswrapper[5065]: I1008 13:18:18.978668 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:19 crc kubenswrapper[5065]: E1008 13:18:19.012661 5065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="400ms" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.032406 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.033624 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.033655 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.033664 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.033727 5065 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 13:18:19 crc kubenswrapper[5065]: E1008 13:18:19.034224 5065 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.45:6443: connect: connection refused" node="crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.041451 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.041604 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.041712 5065 
Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.041451 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.041604 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.041712 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.041781 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.041865 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.041927 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.041996 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.042059 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.042177 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.042288 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.042371 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.042480 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.042549 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.042614 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.042693 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143590 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143657 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143718 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143765 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143769 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143825 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143789 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143862 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143864 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143892 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143887 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143927 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143948 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143954 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143972 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143977 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.143990 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.144014 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.144038 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.144050 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.144058 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.144073 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.144075 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.144101 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.144100 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.144121 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.144167 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.144195 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.144222 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.144255 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.234768 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.237126 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.237234 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.237346 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.237520 5065 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 13:18:19 crc kubenswrapper[5065]: E1008 13:18:19.238720 5065 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.45:6443: connect: connection refused" node="crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.304335 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.310834 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.331443 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: W1008 13:18:19.353143 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6bc9302bab3de260a4bf8a3f9cc153e7f4ca5d97c6c60b7a407125fab41a9ce8 WatchSource:0}: Error finding container 6bc9302bab3de260a4bf8a3f9cc153e7f4ca5d97c6c60b7a407125fab41a9ce8: Status 404 returned error can't find the container with id 6bc9302bab3de260a4bf8a3f9cc153e7f4ca5d97c6c60b7a407125fab41a9ce8 Oct 08 13:18:19 crc kubenswrapper[5065]: W1008 13:18:19.355673 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-5ccc6d1c9e308bafd3a824dcbf3ce0bc7f0bc20caa25d1508854175c69ae10ac WatchSource:0}: Error finding container 5ccc6d1c9e308bafd3a824dcbf3ce0bc7f0bc20caa25d1508854175c69ae10ac: Status 404 returned error can't find the container with id 5ccc6d1c9e308bafd3a824dcbf3ce0bc7f0bc20caa25d1508854175c69ae10ac Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.356694 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: W1008 13:18:19.363802 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b5315c07660b686adb564f462f26003b80bb9eb363fbfc411128ce5fc7346cb0 WatchSource:0}: Error finding container b5315c07660b686adb564f462f26003b80bb9eb363fbfc411128ce5fc7346cb0: Status 404 returned error can't find the container with id b5315c07660b686adb564f462f26003b80bb9eb363fbfc411128ce5fc7346cb0 Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.364719 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 13:18:19 crc kubenswrapper[5065]: W1008 13:18:19.378091 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-fcf452aa8421cdeff6cd97fd181931dd0d38c3ff62506481bd1b0f21348efb07 WatchSource:0}: Error finding container fcf452aa8421cdeff6cd97fd181931dd0d38c3ff62506481bd1b0f21348efb07: Status 404 returned error can't find the container with id fcf452aa8421cdeff6cd97fd181931dd0d38c3ff62506481bd1b0f21348efb07 Oct 08 13:18:19 crc kubenswrapper[5065]: W1008 13:18:19.386719 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e9f9513ef49f495bb7e1b34eeff68ed906ecd2be17a3b6b2fea979da6535e851 WatchSource:0}: Error finding container e9f9513ef49f495bb7e1b34eeff68ed906ecd2be17a3b6b2fea979da6535e851: Status 404 returned error can't find the container with id e9f9513ef49f495bb7e1b34eeff68ed906ecd2be17a3b6b2fea979da6535e851 Oct 08 13:18:19 crc kubenswrapper[5065]: E1008 13:18:19.413848 5065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="800ms" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.639815 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.641922 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.641984 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.641996 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.642023 5065 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 13:18:19 crc kubenswrapper[5065]: E1008 13:18:19.642583 5065 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.45:6443: connect: connection refused" node="crc" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.810054 5065 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Oct 08 13:18:19 crc kubenswrapper[5065]: W1008 13:18:19.850485 5065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Oct 08 13:18:19 crc kubenswrapper[5065]: E1008 13:18:19.850615 5065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: 
connection refused" logger="UnhandledError" Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.876174 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6bc9302bab3de260a4bf8a3f9cc153e7f4ca5d97c6c60b7a407125fab41a9ce8"} Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.877098 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e9f9513ef49f495bb7e1b34eeff68ed906ecd2be17a3b6b2fea979da6535e851"} Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.878679 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fcf452aa8421cdeff6cd97fd181931dd0d38c3ff62506481bd1b0f21348efb07"} Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.879928 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b5315c07660b686adb564f462f26003b80bb9eb363fbfc411128ce5fc7346cb0"} Oct 08 13:18:19 crc kubenswrapper[5065]: I1008 13:18:19.881897 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5ccc6d1c9e308bafd3a824dcbf3ce0bc7f0bc20caa25d1508854175c69ae10ac"} Oct 08 13:18:19 crc kubenswrapper[5065]: W1008 13:18:19.969312 5065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Oct 08 13:18:19 crc kubenswrapper[5065]: E1008 13:18:19.969403 5065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Oct 08 13:18:20 crc kubenswrapper[5065]: W1008 13:18:20.005127 5065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Oct 08 13:18:20 crc kubenswrapper[5065]: E1008 13:18:20.005400 5065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Oct 08 13:18:20 crc kubenswrapper[5065]: E1008 13:18:20.214766 5065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="1.6s" Oct 08 13:18:20 crc kubenswrapper[5065]: W1008 13:18:20.323632 5065 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Oct 08 13:18:20 crc kubenswrapper[5065]: E1008 13:18:20.323786 5065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.443661 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.446092 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.446133 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.446143 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.446195 5065 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 13:18:20 crc kubenswrapper[5065]: E1008 13:18:20.446930 5065 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.45:6443: connect: connection refused" node="crc" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.810500 5065 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.885789 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d"} Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.885863 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4"} Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.885890 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875"} Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.885915 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25"} Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.885810 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:20 crc 
kubenswrapper[5065]: I1008 13:18:20.886984 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.887011 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.887021 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.887107 5065 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43" exitCode=0 Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.887166 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43"} Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.887207 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.890768 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.890829 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.890849 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.892284 5065 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e" exitCode=0 Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.892458 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e"} Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.892567 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.892834 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.893737 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.893768 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.893777 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.893875 5065 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd634b3795fff" exitCode=0 Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.893990 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 
08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.894013 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd634b3795fff"} Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.894499 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.894550 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.894568 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.894573 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.894591 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.894602 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.896958 5065 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4" exitCode=0 Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.897013 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4"} Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.897071 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.898240 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.898320 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:20 crc kubenswrapper[5065]: I1008 13:18:20.898346 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.811351 5065 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Oct 08 13:18:21 crc kubenswrapper[5065]: W1008 13:18:21.813225 5065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Oct 08 13:18:21 crc kubenswrapper[5065]: E1008 13:18:21.813323 5065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Oct 08 13:18:21 crc kubenswrapper[5065]: E1008 13:18:21.815897 5065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="3.2s" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.900870 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a4f47fb4e50df5a6c060421f131f23d561f71d0e8bfa1a9769fedf8380d9162f"} Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.901032 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.910535 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.910574 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.910585 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.914083 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.914124 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2dac57ae099af0a2f05f17da9ddc0853b5513bc747fd5f0aa959d7f3baca74b7"} Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.914173 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e5b09ea08287ed83d2bac95c8b6780b91269b8507b63b1324242eb2f2a7fe840"} Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.914187 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4f437667914b286a4a5be10b7d8e0ff79549b694e7a427b67e403abd0cf67496"} Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.915002 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.915035 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.915046 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.919676 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6"} Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 
13:18:21.919740 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f"} Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.919756 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d"} Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.919770 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb"} Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.919774 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.919782 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0"} Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.920745 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.920778 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.920788 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.922007 5065 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898" exitCode=0 Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.922069 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898"} Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.922085 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.922094 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.922899 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.922944 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.922959 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.923065 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.923086 5065 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:21 crc kubenswrapper[5065]: I1008 13:18:21.923097 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:22 crc kubenswrapper[5065]: W1008 13:18:22.012575 5065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Oct 08 13:18:22 crc kubenswrapper[5065]: E1008 13:18:22.012647 5065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.047945 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.049082 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.049116 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.049127 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.049150 5065 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 13:18:22 crc kubenswrapper[5065]: E1008 13:18:22.049828 5065 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.45:6443: connect: connection refused" node="crc" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.926489 5065 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2" exitCode=0 Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.926606 5065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.926644 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.926665 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.926715 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.926737 5065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.926767 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2"} Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.926810 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.927960 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.927992 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.928013 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.928318 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.928341 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.928351 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.928569 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.928600 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.928612 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.928648 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.928669 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:22 crc kubenswrapper[5065]: I1008 13:18:22.928680 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:23 crc kubenswrapper[5065]: I1008 13:18:23.933712 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149"} Oct 08 13:18:23 crc kubenswrapper[5065]: I1008 13:18:23.933759 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262"} Oct 08 13:18:23 crc kubenswrapper[5065]: I1008 13:18:23.933771 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba"} Oct 08 13:18:23 crc kubenswrapper[5065]: I1008 13:18:23.933782 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150"} Oct 08 13:18:23 crc kubenswrapper[5065]: I1008 13:18:23.933790 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24"} Oct 08 13:18:23 crc kubenswrapper[5065]: I1008 13:18:23.933830 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:23 crc kubenswrapper[5065]: I1008 13:18:23.934756 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:23 crc kubenswrapper[5065]: I1008 13:18:23.934791 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:23 crc kubenswrapper[5065]: I1008 13:18:23.934802 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:24 crc kubenswrapper[5065]: I1008 13:18:24.275098 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:24 crc kubenswrapper[5065]: I1008 13:18:24.275323 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:24 crc kubenswrapper[5065]: I1008 13:18:24.276471 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:24 crc kubenswrapper[5065]: I1008 13:18:24.276502 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:24 crc kubenswrapper[5065]: I1008 13:18:24.276513 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:24 crc kubenswrapper[5065]: I1008 13:18:24.936069 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:24 crc kubenswrapper[5065]: I1008 13:18:24.937371 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:24 crc kubenswrapper[5065]: I1008 13:18:24.937427 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:24 crc kubenswrapper[5065]: I1008 13:18:24.937439 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.230726 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.230962 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.232067 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.232107 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.232116 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.239279 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.250281 5065 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.251632 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.251674 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.251687 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.251711 5065 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.452584 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.452862 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.454953 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.455033 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.455052 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.760992 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.761201 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.762671 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.762715 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.762726 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.808718 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.938262 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.938306 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.938273 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.939380 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.939429 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:25 crc 
kubenswrapper[5065]: I1008 13:18:25.939439 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.939534 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.939575 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:25 crc kubenswrapper[5065]: I1008 13:18:25.939591 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:26 crc kubenswrapper[5065]: I1008 13:18:26.546407 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:26 crc kubenswrapper[5065]: I1008 13:18:26.546665 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:26 crc kubenswrapper[5065]: I1008 13:18:26.547798 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:26 crc kubenswrapper[5065]: I1008 13:18:26.547852 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:26 crc kubenswrapper[5065]: I1008 13:18:26.547864 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:26 crc kubenswrapper[5065]: I1008 13:18:26.940001 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:26 crc kubenswrapper[5065]: I1008 13:18:26.940782 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:26 crc kubenswrapper[5065]: I1008 13:18:26.940809 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:26 crc kubenswrapper[5065]: I1008 13:18:26.940818 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:28 crc kubenswrapper[5065]: E1008 13:18:28.939469 5065 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 08 13:18:29 crc kubenswrapper[5065]: I1008 13:18:29.418300 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Oct 08 13:18:29 crc kubenswrapper[5065]: I1008 13:18:29.418538 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:29 crc kubenswrapper[5065]: I1008 13:18:29.422623 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:29 crc kubenswrapper[5065]: I1008 13:18:29.422674 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:29 crc kubenswrapper[5065]: I1008 13:18:29.422688 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:29 crc kubenswrapper[5065]: I1008 13:18:29.834456 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 13:18:29 crc kubenswrapper[5065]: I1008 13:18:29.834638 5065 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:29 crc kubenswrapper[5065]: I1008 13:18:29.835962 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:29 crc kubenswrapper[5065]: I1008 13:18:29.836021 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:29 crc kubenswrapper[5065]: I1008 13:18:29.836034 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:30 crc kubenswrapper[5065]: I1008 13:18:30.451897 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 13:18:30 crc kubenswrapper[5065]: I1008 13:18:30.452200 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:30 crc kubenswrapper[5065]: I1008 13:18:30.454296 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:30 crc kubenswrapper[5065]: I1008 13:18:30.454345 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:30 crc kubenswrapper[5065]: I1008 13:18:30.454355 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:30 crc kubenswrapper[5065]: I1008 13:18:30.457453 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 13:18:30 crc kubenswrapper[5065]: I1008 13:18:30.949375 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:30 crc kubenswrapper[5065]: I1008 13:18:30.950778 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:30 crc kubenswrapper[5065]: I1008 13:18:30.950881 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:30 crc kubenswrapper[5065]: I1008 13:18:30.950909 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:32 crc kubenswrapper[5065]: I1008 13:18:32.198827 5065 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 08 13:18:32 crc kubenswrapper[5065]: I1008 13:18:32.199541 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 08 13:18:32 crc kubenswrapper[5065]: I1008 13:18:32.204740 5065 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 08 13:18:32 crc kubenswrapper[5065]: I1008 13:18:32.204824 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 08 13:18:33 crc kubenswrapper[5065]: I1008 13:18:33.452052 5065 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 08 13:18:33 crc kubenswrapper[5065]: I1008 13:18:33.452128 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 08 13:18:34 crc kubenswrapper[5065]: I1008 13:18:34.275926 5065 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Oct 08 13:18:34 crc kubenswrapper[5065]: I1008 13:18:34.275999 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.766761 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.767398 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.767878 5065 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.768021 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.768505 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.768595 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.768661 5065 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.772009 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.961077 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.961596 5065 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.961653 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.962293 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.962347 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:35 crc kubenswrapper[5065]: I1008 13:18:35.962358 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.195584 5065 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.198019 5065 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.198613 5065 trace.go:236] Trace[405149799]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Oct-2025 13:18:22.844) (total time: 14354ms): Oct 08 13:18:37 crc kubenswrapper[5065]: Trace[405149799]: ---"Objects listed" error: 14353ms (13:18:37.198) Oct 08 13:18:37 crc kubenswrapper[5065]: Trace[405149799]: [14.354034889s] [14.354034889s] END Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.198664 5065 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.202088 5065 trace.go:236] Trace[1349254930]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Oct-2025 13:18:23.289) (total time: 13912ms): Oct 08 13:18:37 crc kubenswrapper[5065]: Trace[1349254930]: ---"Objects listed" error: 13912ms (13:18:37.202) Oct 08 13:18:37 crc kubenswrapper[5065]: Trace[1349254930]: [13.912858643s] [13.912858643s] END Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.202257 5065 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 08 13:18:37 crc kubenswrapper[5065]: E1008 13:18:37.203274 5065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Oct 08 13:18:37 crc kubenswrapper[5065]: E1008 13:18:37.204983 5065 
kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.206106 5065 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.809042 5065 apiserver.go:52] "Watching apiserver" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.811052 5065 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.811248 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-7d2jj","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.811511 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.811668 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:37 crc kubenswrapper[5065]: E1008 13:18:37.811727 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.811841 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.812203 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 13:18:37 crc kubenswrapper[5065]: E1008 13:18:37.812593 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.812688 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7d2jj" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.812932 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:37 crc kubenswrapper[5065]: E1008 13:18:37.813016 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.812962 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.813784 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.813878 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.814095 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.816043 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.816150 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.816195 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.816089 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.816116 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.816965 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.817128 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.817130 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.818724 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.830605 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.837659 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-dkvkk"] Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.838064 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-96g69"] Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.838816 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.838978 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dkvkk" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.839996 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-f2pbj"] Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.840261 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-8xgfx"] Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.840445 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.840982 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.841731 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.841975 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.843099 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.843848 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.844258 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.844553 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.844617 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.844835 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.848923 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.848985 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.849020 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.849018 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.849050 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.849148 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.849347 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.849475 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.849544 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.849656 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.849839 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.850138 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.863270 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.877522 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.888874 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54
319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.896853 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.903687 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.912141 5065 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.912564 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.920288 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.934261 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.942144 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.951211 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.958882 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.968255 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.969961 5065 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6" exitCode=255 Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.970010 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6"} Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.979571 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.989260 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:37 crc kubenswrapper[5065]: I1008 13:18:37.995833 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.003516 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.003633 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.003665 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.003688 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 13:18:38 crc 
kubenswrapper[5065]: I1008 13:18:38.003713 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.003734 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.003755 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.004050 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.004117 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.004150 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.004198 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.004388 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.004508 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.004804 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.004832 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.004955 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.004988 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005040 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005046 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005066 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005077 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005146 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005183 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005214 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005245 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005337 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005391 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005443 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005479 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005485 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod 
"4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005511 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005543 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005574 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005608 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005638 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005673 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005708 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005712 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005738 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005777 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005807 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005838 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005874 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005891 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005906 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005936 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005969 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.005986 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006000 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006030 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006064 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006096 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006126 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006158 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006189 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006223 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006234 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006257 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006294 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006290 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006358 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006390 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006446 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006482 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006515 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006541 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006559 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006572 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006603 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006637 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006670 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006702 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006732 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006764 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006794 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006823 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006867 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" 
(UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006899 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006935 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.006969 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.007005 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.007011 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.007030 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.007036 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.007132 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.007148 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.007166 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.007200 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.007226 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.007262 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.007353 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.007654 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.007865 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.008087 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.008177 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.008325 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.008388 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.008509 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.008526 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.008582 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.008922 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.008961 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.008983 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.009006 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.009031 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.009054 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.009076 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.009101 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.009124 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.009168 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.009308 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.009653 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.009889 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.010066 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.010122 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.010176 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.010216 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.010308 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.010648 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.011160 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.011491 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.011725 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.011816 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012236 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012390 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.010349 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012503 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012535 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012594 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012621 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012646 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012669 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012694 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012717 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012738 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012760 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012781 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012803 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012829 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012853 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012877 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012929 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012955 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012953 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.012979 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013005 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013029 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013052 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013074 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013096 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013118 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013143 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013167 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013211 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013236 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013261 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013283 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013306 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013329 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013357 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013381 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013405 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013446 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013470 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013492 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013516 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013524 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013540 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013553 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013673 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013730 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013738 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013794 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013816 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013872 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013889 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013908 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013924 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013948 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013964 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013980 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.013997 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014015 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014033 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014050 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014067 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014085 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014109 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014133 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014153 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014171 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014187 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014204 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014220 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014238 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014254 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014269 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014286 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014301 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014317 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014333 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014347 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014366 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014396 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014446 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014464 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014480 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014496 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014513 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014530 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014548 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014612 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014631 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014649 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014673 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014694 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014717 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014735 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014759 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014784 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 
13:18:38.014802 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014817 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014835 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014852 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014869 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014886 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014903 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014920 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014936 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014953 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014971 5065 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014988 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015004 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015023 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015041 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015058 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015074 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015089 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015105 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015121 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015138 
5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015153 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015170 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015193 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015268 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015296 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-system-cni-dir\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015315 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-run-netns\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015333 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015355 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015381 5065 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015434 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015461 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-env-overrides\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015522 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-multus-socket-dir-parent\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015543 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-multus-daemon-config\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015566 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-var-lib-cni-bin\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015587 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-cni-bin\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015610 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2nt5\" (UniqueName: \"kubernetes.io/projected/43581862-a068-411a-b8f4-c06aa7951856-kube-api-access-p2nt5\") pod \"node-resolver-7d2jj\" (UID: \"43581862-a068-411a-b8f4-c06aa7951856\") " pod="openshift-dns/node-resolver-7d2jj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015633 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ee6fc83-d6a5-4808-bea3-6fa4978bad1f-mcd-auth-proxy-config\") pod \"machine-config-daemon-f2pbj\" (UID: \"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\") " 
pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015869 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgs67\" (UniqueName: \"kubernetes.io/projected/0ee6fc83-d6a5-4808-bea3-6fa4978bad1f-kube-api-access-kgs67\") pod \"machine-config-daemon-f2pbj\" (UID: \"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\") " pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015898 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0ee6fc83-d6a5-4808-bea3-6fa4978bad1f-rootfs\") pod \"machine-config-daemon-f2pbj\" (UID: \"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\") " pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015922 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015944 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-var-lib-kubelet\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015970 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015992 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-systemd-units\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016011 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21825a9e-72d6-4850-af25-cafacf1ffff4-cnibin\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016035 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21825a9e-72d6-4850-af25-cafacf1ffff4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016061 5065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-ovn\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016082 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-cnibin\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016105 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21825a9e-72d6-4850-af25-cafacf1ffff4-system-cni-dir\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016129 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0ee6fc83-d6a5-4808-bea3-6fa4978bad1f-proxy-tls\") pod \"machine-config-daemon-f2pbj\" (UID: \"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\") " pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016149 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-hostroot\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016175 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016200 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-etc-openvswitch\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016221 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/43581862-a068-411a-b8f4-c06aa7951856-hosts-file\") pod \"node-resolver-7d2jj\" (UID: \"43581862-a068-411a-b8f4-c06aa7951856\") " pod="openshift-dns/node-resolver-7d2jj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016242 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/21825a9e-72d6-4850-af25-cafacf1ffff4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 
crc kubenswrapper[5065]: I1008 13:18:38.016265 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-run-ovn-kubernetes\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016286 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovnkube-script-lib\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016311 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016333 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-run-multus-certs\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016356 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-openvswitch\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016379 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovnkube-config\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016403 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovn-node-metrics-cert\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016479 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016509 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21825a9e-72d6-4850-af25-cafacf1ffff4-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016552 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx46q\" (UniqueName: \"kubernetes.io/projected/21825a9e-72d6-4850-af25-cafacf1ffff4-kube-api-access-rx46q\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016583 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016603 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-cni-netd\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016630 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016652 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-os-release\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016672 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-run-netns\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016693 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-run-k8s-cni-cncf-io\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016714 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-etc-kubernetes\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016739 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" 
(UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016764 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016789 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-log-socket\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016810 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-kubelet\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016830 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-node-log\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016849 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xftmm\" (UniqueName: \"kubernetes.io/projected/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-kube-api-access-xftmm\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016944 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwdsf\" (UniqueName: \"kubernetes.io/projected/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-kube-api-access-nwdsf\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016971 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-var-lib-openvswitch\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016998 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-multus-cni-dir\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 
13:18:38.017019 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-cni-binary-copy\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017041 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-multus-conf-dir\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017063 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017084 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-var-lib-cni-multus\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017107 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21825a9e-72d6-4850-af25-cafacf1ffff4-os-release\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017130 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-slash\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017150 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-systemd\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017224 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017242 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017720 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on 
node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.020638 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.021067 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.025502 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014077 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod 
"8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014859 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014832 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.014914 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015159 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015448 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015639 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015694 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.027292 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.015767 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016031 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.016506 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017157 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017202 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017235 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.017678 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-08 13:18:38.517292176 +0000 UTC m=+20.294674013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017675 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017739 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.017815 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.018144 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.018313 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.018390 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.018653 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.018739 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.018932 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.021006 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.021198 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.021225 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.021226 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.021459 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.021521 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.021563 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.021588 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.021664 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.021961 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.022047 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.022183 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.022178 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.023778 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.024521 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.024522 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.025279 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.025618 5065 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.025883 5065 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.025870 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.028397 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.028446 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029084 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029133 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029159 5065 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029190 5065 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029215 5065 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029243 5065 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029266 5065 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029294 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029317 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029345 5065 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029368 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029396 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029434 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029463 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029484 5065 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029507 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029547 5065 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029570 5065 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029602 5065 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029624 5065 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029652 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.029674 5065 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.030149 5065 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.030564 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:38.53047857 +0000 UTC m=+20.307860327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.030581 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:38.530573023 +0000 UTC m=+20.307954780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.030615 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.030804 5065 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.030848 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.030864 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.030890 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.030907 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.030922 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.030937 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.030961 5065 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.030976 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.030994 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.030994 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031011 5065 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031055 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031064 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031077 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031093 5065 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031120 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031127 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031221 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031242 5065 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031261 5065 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031309 5065 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031395 5065 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031395 5065 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.031552 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.032061 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.032188 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.032221 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.032341 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.032684 5065 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.033058 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.034379 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.036325 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.038913 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.047140 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.047434 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.047585 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.047718 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.047788 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.047802 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.047861 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.047867 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.048046 5065 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.048272 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.048279 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.048494 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.048643 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.048940 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.049047 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.048982 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.048977 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.049141 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.049371 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.049871 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.049889 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.049905 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.049992 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.050032 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.050165 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.050271 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.050330 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.050359 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.050372 5065 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.050494 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:38.550449927 +0000 UTC m=+20.327831784 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.050738 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.050928 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.050951 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.050964 5065 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.051018 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:38.550998132 +0000 UTC m=+20.328379889 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.051001 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.051103 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.051222 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.051317 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.051477 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.051495 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.051598 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.051609 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.051849 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.052060 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.052169 5065 scope.go:117] "RemoveContainer" containerID="dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.052312 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.052521 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.052673 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.052956 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.054612 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.056198 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.056232 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.056080 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.056328 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.056448 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.057579 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.057731 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.057991 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.061483 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.062977 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.063733 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.063921 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.064483 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.064510 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.064621 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.064762 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.064964 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.065456 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.065613 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.065705 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.065902 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.066062 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.066277 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.066429 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.066897 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.066934 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.067037 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.067138 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.067333 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.067369 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.067442 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.067468 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.067725 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.067832 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.068001 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.068124 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.067340 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.067221 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.068291 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.068453 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.068983 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.069216 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.070142 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.070269 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.070353 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.070629 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.071117 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.072493 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.072760 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.075046 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.076145 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.076095 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.076363 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.076794 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.080038 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.090551 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":fals
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":
\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.100108 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.105316 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.107047 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.109788 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.117486 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.134760 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 13:18:38 crc kubenswrapper[5065]: W1008 13:18:38.146937 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-ba998545fc254a3b00645e18d1306f7c084078cc4a2052b540b92ff5d5eb1821 WatchSource:0}: Error finding container ba998545fc254a3b00645e18d1306f7c084078cc4a2052b540b92ff5d5eb1821: Status 404 returned error can't find the container with id ba998545fc254a3b00645e18d1306f7c084078cc4a2052b540b92ff5d5eb1821 Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148474 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-os-release\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148517 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-run-netns\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148562 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21825a9e-72d6-4850-af25-cafacf1ffff4-cni-binary-copy\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148585 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx46q\" (UniqueName: \"kubernetes.io/projected/21825a9e-72d6-4850-af25-cafacf1ffff4-kube-api-access-rx46q\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148620 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 13:18:38 crc 
kubenswrapper[5065]: I1008 13:18:38.148637 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-cni-netd\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148658 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-run-k8s-cni-cncf-io\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148678 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-etc-kubernetes\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148702 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-log-socket\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148717 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-kubelet\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148733 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-node-log\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148750 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xftmm\" (UniqueName: \"kubernetes.io/projected/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-kube-api-access-xftmm\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148763 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-cni-binary-copy\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148780 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-multus-conf-dir\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148793 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwdsf\" (UniqueName: 
\"kubernetes.io/projected/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-kube-api-access-nwdsf\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148809 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-var-lib-openvswitch\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148824 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-multus-cni-dir\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148838 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-slash\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148872 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-systemd\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148887 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148903 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-var-lib-cni-multus\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148921 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21825a9e-72d6-4850-af25-cafacf1ffff4-os-release\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148940 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148958 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-system-cni-dir\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148976 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-run-netns\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.148993 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-multus-socket-dir-parent\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149011 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-multus-daemon-config\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149036 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-env-overrides\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149068 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ee6fc83-d6a5-4808-bea3-6fa4978bad1f-mcd-auth-proxy-config\") pod \"machine-config-daemon-f2pbj\" (UID: \"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\") " pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149106 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgs67\" (UniqueName: \"kubernetes.io/projected/0ee6fc83-d6a5-4808-bea3-6fa4978bad1f-kube-api-access-kgs67\") pod \"machine-config-daemon-f2pbj\" (UID: \"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\") " pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149121 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-var-lib-cni-bin\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149138 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-cni-bin\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149152 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2nt5\" (UniqueName: 
\"kubernetes.io/projected/43581862-a068-411a-b8f4-c06aa7951856-kube-api-access-p2nt5\") pod \"node-resolver-7d2jj\" (UID: \"43581862-a068-411a-b8f4-c06aa7951856\") " pod="openshift-dns/node-resolver-7d2jj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149165 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-var-lib-kubelet\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149179 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0ee6fc83-d6a5-4808-bea3-6fa4978bad1f-rootfs\") pod \"machine-config-daemon-f2pbj\" (UID: \"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\") " pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149201 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-systemd-units\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149217 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21825a9e-72d6-4850-af25-cafacf1ffff4-cnibin\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149231 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21825a9e-72d6-4850-af25-cafacf1ffff4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149249 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-cnibin\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149278 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21825a9e-72d6-4850-af25-cafacf1ffff4-system-cni-dir\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149295 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-ovn\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149323 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-etc-openvswitch\") 
pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149337 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/43581862-a068-411a-b8f4-c06aa7951856-hosts-file\") pod \"node-resolver-7d2jj\" (UID: \"43581862-a068-411a-b8f4-c06aa7951856\") " pod="openshift-dns/node-resolver-7d2jj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149352 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0ee6fc83-d6a5-4808-bea3-6fa4978bad1f-proxy-tls\") pod \"machine-config-daemon-f2pbj\" (UID: \"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\") " pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149366 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-hostroot\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149387 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-run-multus-certs\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149402 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/21825a9e-72d6-4850-af25-cafacf1ffff4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149432 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-run-ovn-kubernetes\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149452 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovnkube-script-lib\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149467 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-openvswitch\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149482 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovnkube-config\") pod \"ovnkube-node-96g69\" (UID: 
\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149497 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovn-node-metrics-cert\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149551 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149561 5065 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149570 5065 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149581 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149594 5065 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149603 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149612 5065 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149621 5065 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149636 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149649 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149663 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc 
kubenswrapper[5065]: I1008 13:18:38.149714 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-multus-socket-dir-parent\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149738 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149749 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149759 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149768 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149777 5065 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149786 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149809 5065 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149821 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149831 5065 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149840 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149855 5065 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149865 5065 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" 
Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149875 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149884 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149892 5065 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149888 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-os-release\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149902 5065 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149943 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149954 5065 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149964 5065 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149974 5065 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149987 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.149997 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150009 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150018 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150028 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150038 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150048 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150058 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150066 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150075 5065 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150083 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150092 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150102 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150111 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150119 5065 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150127 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150136 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150145 5065 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150154 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150162 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150171 5065 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150179 5065 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150188 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150197 5065 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150206 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150214 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150223 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150233 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150244 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150253 5065 
reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150262 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150270 5065 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150279 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150287 5065 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150296 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150315 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150324 5065 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150333 5065 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150342 5065 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150353 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150362 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150371 5065 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150378 5065 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150387 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150395 5065 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150404 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150433 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150444 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150454 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150463 5065 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150472 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150480 5065 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150489 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150497 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150505 5065 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 
13:18:38.150513 5065 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150522 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150530 5065 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150538 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150547 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150555 5065 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150563 5065 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150572 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150581 5065 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150591 5065 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150600 5065 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150609 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150617 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150625 5065 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150635 5065 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150643 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150651 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150660 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150668 5065 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150676 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150685 5065 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150694 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150703 5065 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150711 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150720 5065 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150729 5065 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150737 5065 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150745 5065 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150753 5065 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150762 5065 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150770 5065 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150779 5065 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150787 5065 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150795 5065 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150803 5065 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150811 5065 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150819 5065 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150828 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150837 5065 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150845 5065 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150827 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-multus-daemon-config\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150853 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150887 5065 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150897 5065 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150906 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150935 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150945 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150959 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150969 5065 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150979 5065 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150990 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151055 5065 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151065 5065 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151097 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151124 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151134 5065 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151143 5065 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151152 5065 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151160 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151169 5065 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151177 5065 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.150872 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-run-netns\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151255 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-multus-conf-dir\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151290 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-env-overrides\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 
13:18:38.151381 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21825a9e-72d6-4850-af25-cafacf1ffff4-cni-binary-copy\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151589 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-var-lib-openvswitch\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151688 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151700 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-multus-cni-dir\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151728 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-cni-netd\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151739 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-slash\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151768 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-run-k8s-cni-cncf-io\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151774 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-systemd\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151791 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-etc-kubernetes\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151814 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-log-socket\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151854 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-kubelet\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.151875 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-node-log\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152350 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-var-lib-cni-multus\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152436 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21825a9e-72d6-4850-af25-cafacf1ffff4-os-release\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152476 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152522 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-system-cni-dir\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152531 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-var-lib-cni-bin\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152554 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-run-netns\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152573 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-cni-bin\") pod \"ovnkube-node-96g69\" (UID: 
\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152617 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-cni-binary-copy\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152688 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/43581862-a068-411a-b8f4-c06aa7951856-hosts-file\") pod \"node-resolver-7d2jj\" (UID: \"43581862-a068-411a-b8f4-c06aa7951856\") " pod="openshift-dns/node-resolver-7d2jj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152736 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-var-lib-kubelet\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152756 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0ee6fc83-d6a5-4808-bea3-6fa4978bad1f-rootfs\") pod \"machine-config-daemon-f2pbj\" (UID: \"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\") " pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152771 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-systemd-units\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152999 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-cnibin\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.153036 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21825a9e-72d6-4850-af25-cafacf1ffff4-system-cni-dir\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.153059 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-ovn\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.153080 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-etc-openvswitch\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.153127 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-run-ovn-kubernetes\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.153148 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-hostroot\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.153167 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-host-run-multus-certs\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.153314 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-openvswitch\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.153680 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/21825a9e-72d6-4850-af25-cafacf1ffff4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.153831 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovnkube-script-lib\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.154149 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.154314 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovnkube-config\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.152788 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21825a9e-72d6-4850-af25-cafacf1ffff4-cnibin\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.154530 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/21825a9e-72d6-4850-af25-cafacf1ffff4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.155384 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ee6fc83-d6a5-4808-bea3-6fa4978bad1f-mcd-auth-proxy-config\") pod \"machine-config-daemon-f2pbj\" (UID: \"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\") " pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.157566 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.157836 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0ee6fc83-d6a5-4808-bea3-6fa4978bad1f-proxy-tls\") pod \"machine-config-daemon-f2pbj\" (UID: \"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\") " pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.158182 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovn-node-metrics-cert\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.171029 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2nt5\" (UniqueName: \"kubernetes.io/projected/43581862-a068-411a-b8f4-c06aa7951856-kube-api-access-p2nt5\") pod \"node-resolver-7d2jj\" (UID: \"43581862-a068-411a-b8f4-c06aa7951856\") " pod="openshift-dns/node-resolver-7d2jj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.172338 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx46q\" (UniqueName: \"kubernetes.io/projected/21825a9e-72d6-4850-af25-cafacf1ffff4-kube-api-access-rx46q\") pod \"multus-additional-cni-plugins-8xgfx\" (UID: \"21825a9e-72d6-4850-af25-cafacf1ffff4\") " pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.182773 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgs67\" (UniqueName: \"kubernetes.io/projected/0ee6fc83-d6a5-4808-bea3-6fa4978bad1f-kube-api-access-kgs67\") pod \"machine-config-daemon-f2pbj\" (UID: \"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\") " pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.182972 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.186425 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.186901 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xftmm\" (UniqueName: \"kubernetes.io/projected/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-kube-api-access-xftmm\") pod \"ovnkube-node-96g69\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.193984 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwdsf\" (UniqueName: \"kubernetes.io/projected/ddc2ce1c-bf76-4663-a2d6-e518ff7a4678-kube-api-access-nwdsf\") pod \"multus-dkvkk\" (UID: \"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\") " pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: W1008 13:18:38.199847 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ee6fc83_d6a5_4808_bea3_6fa4978bad1f.slice/crio-842180e66de2a394c47ccd77e2f4f005cdf49e93cfa818806c436579fdf49c12 WatchSource:0}: Error finding container 842180e66de2a394c47ccd77e2f4f005cdf49e93cfa818806c436579fdf49c12: Status 404 returned error can't find the container with id 842180e66de2a394c47ccd77e2f4f005cdf49e93cfa818806c436579fdf49c12 Oct 08 13:18:38 crc kubenswrapper[5065]: W1008 13:18:38.204179 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21825a9e_72d6_4850_af25_cafacf1ffff4.slice/crio-71c65fd0fcf61dc3e64ff1474002a660379483f358113dde7b080f3de77802df WatchSource:0}: Error finding container 71c65fd0fcf61dc3e64ff1474002a660379483f358113dde7b080f3de77802df: Status 404 returned error can't find the container with id 71c65fd0fcf61dc3e64ff1474002a660379483f358113dde7b080f3de77802df Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.426630 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.445148 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7d2jj" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.465788 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.473314 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-dkvkk" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.555311 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.555449 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.555488 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.555517 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.555547 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.555660 5065 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.555718 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:39.555698356 +0000 UTC m=+21.333080113 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.556103 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-08 13:18:39.556092116 +0000 UTC m=+21.333473873 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.556184 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.556200 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.556211 5065 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.556239 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:39.55623018 +0000 UTC m=+21.333611937 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.556293 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.556304 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.556312 5065 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.556340 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:39.556331233 +0000 UTC m=+21.333713000 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.556372 5065 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: E1008 13:18:38.556396 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:39.556389194 +0000 UTC m=+21.333770951 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.876743 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.877447 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.878125 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.878907 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.879598 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.880128 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.880739 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.881326 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.881944 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.882562 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.883140 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.884135 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.884844 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.885554 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.886248 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.886879 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.887381 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.888005 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.888521 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.889248 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.890013 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.890617 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.891294 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
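
The long run of "Cleaned up orphaned pod volumes dir" records is routine kubelet housekeeping: each removed path is /var/lib/kubelet/pods/<pod-UID>/volumes for a pod the kubelet no longer knows about, as the paths in the records themselves show. A small read-only Go sketch of inspecting that layout on the node (no kubelet internals assumed, just the directory structure visible in the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const podsDir = "/var/lib/kubelet/pods"
	entries, err := os.ReadDir(podsDir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue // skip stray files
		}
		// Each directory name is a pod UID; a surviving volumes/ subdir
		// means the kubelet has not (yet) reclaimed it.
		volumes := filepath.Join(podsDir, e.Name(), "volumes")
		if _, err := os.Stat(volumes); err == nil {
			fmt.Println("pod dir still holding volumes:", e.Name())
		}
	}
}
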
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.891863 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.892702 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.893263 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.893994 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.895435 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.895909 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.896527 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.897329 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.897829 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.898315 5065 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.898466 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.899876 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.900452 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.900945 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.902099 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" 
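
Every status patch in this window fails against the same admission webhook, pod.network-node-identity.openshift.io at https://127.0.0.1:9743/pod, with "connection refused": the webhook's own pod (network-node-identity-vrzqb) is still in ContainerCreating per the patches above, so nothing is listening yet. Reachability can be checked from the node with the standard library alone; a sketch, sending an empty POST purely as a probe:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			// The webhook serves TLS; skip verification because this
			// probe only asks whether the listener is up at all.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Post("https://127.0.0.1:9743/pod?timeout=10s", "application/json", nil)
	if err != nil {
		// Expect "connection refused" while the webhook pod is down.
		fmt.Println("webhook unreachable:", err)
		return
	}
	defer resp.Body.Close()
	// Any HTTP response (even a rejection of the empty body) proves
	// the listener is accepting connections again.
	fmt.Println("webhook answered:", resp.Status)
}
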
path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.902803 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.903455 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.904203 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.904962 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.905627 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.908577 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.909351 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.910509 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.911234 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.912089 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.912805 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.913826 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.914585 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.915653 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.916261 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.916895 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
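
The patch payloads in these records are hard to read because journald shows them Go-quoted (every inner quote backslash-escaped, with additional levels for nested strings such as container termination messages). Unquoting once and re-indenting recovers the JSON. A minimal sketch using a shortened, illustrative fragment; in practice, paste the full quoted payload from the record of interest:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Shortened fragment of one escaped patch, shaped like the log text.
	raw := `"{\"metadata\":{\"uid\":\"9d751cbb-f2e2-430d-9754-c882a5e924a5\"},\"status\":{\"podIP\":null}}"`
	unquoted, err := strconv.Unquote(raw) // undo one level of Go quoting
	if err != nil {
		panic(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(unquoted), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}

Deeply nested string values (for example the embedded check-endpoints log excerpt) carry further escaping levels and may need Unquote applied again to the individual field.
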
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.917385 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-synce
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.917996 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.918615 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.919691 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.920225 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.926752 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.933594 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.946610 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.961678 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:38Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.974066 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.974136 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.974154 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"842180e66de2a394c47ccd77e2f4f005cdf49e93cfa818806c436579fdf49c12"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.976252 5065 generic.go:334] "Generic (PLEG): container finished" podID="21825a9e-72d6-4850-af25-cafacf1ffff4" containerID="3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1" exitCode=0 Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.976256 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" event={"ID":"21825a9e-72d6-4850-af25-cafacf1ffff4","Type":"ContainerDied","Data":"3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.976348 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" event={"ID":"21825a9e-72d6-4850-af25-cafacf1ffff4","Type":"ContainerStarted","Data":"71c65fd0fcf61dc3e64ff1474002a660379483f358113dde7b080f3de77802df"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.981518 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.981574 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
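
From this point the webhook failure mode changes: the status patch for iptables-alerter-4ln5h fails not with "connection refused" but with "x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:38Z is after 2025-08-24T17:21:41Z", i.e. the listener on 127.0.0.1:9743 is now up (consistent with the network-node-identity-vrzqb ContainerStarted events just above) but is serving a certificate that lapsed in August, consistent with a cluster image whose certificates expired while it was powered off and are rotated after startup. The served certificate's validity window can be read directly, even though verification fails; a diagnostic sketch:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify lets us retrieve the certificate chain even
	// though verification would fail; this is a read-only diagnostic.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%v notAfter=%s expired=%t\n",
			cert.Subject,
			cert.NotAfter.Format(time.RFC3339),
			time.Now().After(cert.NotAfter))
	}
}
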
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.981589 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ba998545fc254a3b00645e18d1306f7c084078cc4a2052b540b92ff5d5eb1821"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.983167 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkvkk" event={"ID":"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678","Type":"ContainerStarted","Data":"72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.983223 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkvkk" event={"ID":"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678","Type":"ContainerStarted","Data":"1334b26e8130ec51b85c85683abaeea4d82e377ebeef90ffc50cbc751827f733"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.984851 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.984886 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ec5f121022b80699905960988546cc9e8bc8196f9744171dc03d13f412a8e5f7"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.985210 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:38Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.985826 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"75242cd1847f90f9a3166ae221fdfb10b4ab1dd7b2a2a8e8a1826651b700eb62"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.990565 5065 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.993555 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.993721 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.996529 5065 generic.go:334] "Generic (PLEG): container finished" podID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerID="d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af" exitCode=0 Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.996584 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.996606 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerStarted","Data":"3d8ae8dae4bbfd436440942d2f844a7d842e9c0bbf66a6f7d1d62703e371bb55"} Oct 08 13:18:38 crc kubenswrapper[5065]: I1008 13:18:38.999092 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:38Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.000431 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7d2jj" event={"ID":"43581862-a068-411a-b8f4-c06aa7951856","Type":"ContainerStarted","Data":"d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706"} Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.000469 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7d2jj" event={"ID":"43581862-a068-411a-b8f4-c06aa7951856","Type":"ContainerStarted","Data":"b6b2a337693abbf94cba33813b7c913eee786e6138d5c9edf603288eae980e1e"} Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.012241 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.028568 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.047772 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.067298 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z 
is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.082168 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.096375 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc 
kubenswrapper[5065]: I1008 13:18:39.114348 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.124396 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.135929 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.147339 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.158476 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.172003 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.186703 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.199612 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.447022 5065 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-etcd/etcd-crc" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.463844 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.463978 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.464639 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.479624 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.495482 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.508033 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.520486 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.532795 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.554326 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z 
is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.566529 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.566736 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:18:41.566701297 +0000 UTC m=+23.344083054 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.566798 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.566870 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.566910 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.566940 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.567025 5065 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.567065 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-10-08 13:18:41.567058367 +0000 UTC m=+23.344440124 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.567175 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.567223 5065 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.567243 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.567374 5065 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.567315 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:41.567294063 +0000 UTC m=+23.344675820 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.567323 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.567652 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.567668 5065 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.567617 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:41.567599461 +0000 UTC m=+23.344981208 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.567747 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:41.567721975 +0000 UTC m=+23.345103732 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.568912 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"}
,{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.585344 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc 
kubenswrapper[5065]: I1008 13:18:39.607774 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.622512 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.635963 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.654367 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.668578 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.681527 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.694817 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.712058 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z 
is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.722802 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.734893 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc 
kubenswrapper[5065]: I1008 13:18:39.757756 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.773900 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.787337 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.805503 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.820011 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.833245 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.873400 5065 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.873516 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.873614 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:18:39 crc kubenswrapper[5065]: I1008 13:18:39.873662 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.873717 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:18:39 crc kubenswrapper[5065]: E1008 13:18:39.874099 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.008487 5065 generic.go:334] "Generic (PLEG): container finished" podID="21825a9e-72d6-4850-af25-cafacf1ffff4" containerID="171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5" exitCode=0 Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.008584 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" event={"ID":"21825a9e-72d6-4850-af25-cafacf1ffff4","Type":"ContainerDied","Data":"171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5"} Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.018767 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerStarted","Data":"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"} Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.018810 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerStarted","Data":"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"} Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.018827 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerStarted","Data":"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"} Oct 08 13:18:40 crc kubenswrapper[5065]: E1008 13:18:40.029341 5065 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.029757 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.048528 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-fdcv2"] Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.050119 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-fdcv2" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.054741 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.055897 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.056149 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.061254 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.061686 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.075534 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.086252 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.100141 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.112120 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.133202 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.148217 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.165089 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.174075 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw9vt\" (UniqueName: \"kubernetes.io/projected/4fbb1473-7275-422e-b8fd-e4f9869950d8-kube-api-access-cw9vt\") pod \"node-ca-fdcv2\" (UID: \"4fbb1473-7275-422e-b8fd-e4f9869950d8\") " pod="openshift-image-registry/node-ca-fdcv2"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.174111 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fbb1473-7275-422e-b8fd-e4f9869950d8-host\") pod \"node-ca-fdcv2\" (UID: \"4fbb1473-7275-422e-b8fd-e4f9869950d8\") " pod="openshift-image-registry/node-ca-fdcv2"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.174137 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4fbb1473-7275-422e-b8fd-e4f9869950d8-serviceca\") pod \"node-ca-fdcv2\" (UID: \"4fbb1473-7275-422e-b8fd-e4f9869950d8\") " pod="openshift-image-registry/node-ca-fdcv2"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.184427 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.197907 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.213826 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.255516 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.274557 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4fbb1473-7275-422e-b8fd-e4f9869950d8-serviceca\") pod \"node-ca-fdcv2\" (UID: \"4fbb1473-7275-422e-b8fd-e4f9869950d8\") " pod="openshift-image-registry/node-ca-fdcv2"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.274627 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw9vt\" (UniqueName: \"kubernetes.io/projected/4fbb1473-7275-422e-b8fd-e4f9869950d8-kube-api-access-cw9vt\") pod \"node-ca-fdcv2\" (UID: \"4fbb1473-7275-422e-b8fd-e4f9869950d8\") " pod="openshift-image-registry/node-ca-fdcv2"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.274650 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fbb1473-7275-422e-b8fd-e4f9869950d8-host\") pod \"node-ca-fdcv2\" (UID: \"4fbb1473-7275-422e-b8fd-e4f9869950d8\") " pod="openshift-image-registry/node-ca-fdcv2"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.274713 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fbb1473-7275-422e-b8fd-e4f9869950d8-host\") pod \"node-ca-fdcv2\" (UID: \"4fbb1473-7275-422e-b8fd-e4f9869950d8\") " pod="openshift-image-registry/node-ca-fdcv2"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.276096 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4fbb1473-7275-422e-b8fd-e4f9869950d8-serviceca\") pod \"node-ca-fdcv2\" (UID: \"4fbb1473-7275-422e-b8fd-e4f9869950d8\") " pod="openshift-image-registry/node-ca-fdcv2"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.282050 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.315455 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw9vt\" (UniqueName: \"kubernetes.io/projected/4fbb1473-7275-422e-b8fd-e4f9869950d8-kube-api-access-cw9vt\") pod \"node-ca-fdcv2\" (UID: \"4fbb1473-7275-422e-b8fd-e4f9869950d8\") " pod="openshift-image-registry/node-ca-fdcv2"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.340077 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.372526 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-fdcv2"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.379603 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:40 crc kubenswrapper[5065]: W1008 13:18:40.400582 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fbb1473_7275_422e_b8fd_e4f9869950d8.slice/crio-8962e64b6f324cbfab2025ac2d1d6529e269aea4d6ff2df707e6fdadb6aaefe5 WatchSource:0}: Error finding container 8962e64b6f324cbfab2025ac2d1d6529e269aea4d6ff2df707e6fdadb6aaefe5: Status 404 returned error can't find the container with id 8962e64b6f324cbfab2025ac2d1d6529e269aea4d6ff2df707e6fdadb6aaefe5
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.422002 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.470849 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.478358 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.483021 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.501534 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\
\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.519752 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.562753 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-
08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.600123 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.643770 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.680244 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.722310 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.767358 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09
a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.800951 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.838315 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.877274 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.920065 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:40 crc kubenswrapper[5065]: I1008 13:18:40.959187 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:40Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.004486 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z 
is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.027442 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerStarted","Data":"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"} Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.027508 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerStarted","Data":"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"} Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.027529 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerStarted","Data":"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"} Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.028477 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fdcv2" event={"ID":"4fbb1473-7275-422e-b8fd-e4f9869950d8","Type":"ContainerStarted","Data":"f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba"} Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.028557 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fdcv2" event={"ID":"4fbb1473-7275-422e-b8fd-e4f9869950d8","Type":"ContainerStarted","Data":"8962e64b6f324cbfab2025ac2d1d6529e269aea4d6ff2df707e6fdadb6aaefe5"} Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.030584 5065 generic.go:334] "Generic (PLEG): container finished" podID="21825a9e-72d6-4850-af25-cafacf1ffff4" containerID="bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4" exitCode=0 Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.030642 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" event={"ID":"21825a9e-72d6-4850-af25-cafacf1ffff4","Type":"ContainerDied","Data":"bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4"} Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.032092 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f"} Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.039575 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.082953 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.120496 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.160827 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.201458 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.239588 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.284093 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.320204 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.359991 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.398399 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.438823 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.488782 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z 
is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.520783 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.561022 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.589489 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.589570 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.589596 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.589620 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.589649 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:18:45.589629853 +0000 UTC m=+27.367011610 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.589672 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.589709 5065 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.589741 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:45.589733316 +0000 UTC m=+27.367115073 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.589752 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.589764 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.589774 5065 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.589798 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:45.589791788 +0000 UTC m=+27.367173545 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.589835 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.589873 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.589889 5065 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.589948 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:45.589927861 +0000 UTC m=+27.367309708 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.590014 5065 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.590124 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:45.590095976 +0000 UTC m=+27.367477813 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.601783 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.638516 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.681070 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.719265 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.761868 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.805250 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09
a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.844729 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.873190 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.873341 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.873557 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.873707 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.873810 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:41 crc kubenswrapper[5065]: E1008 13:18:41.873991 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.884216 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.922876 5065 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.961030 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:41 crc kubenswrapper[5065]: I1008 13:18:41.998566 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:41Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.038042 5065 generic.go:334] "Generic (PLEG): container finished" podID="21825a9e-72d6-4850-af25-cafacf1ffff4" containerID="1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430" exitCode=0 Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.038130 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" event={"ID":"21825a9e-72d6-4850-af25-cafacf1ffff4","Type":"ContainerDied","Data":"1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430"} Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.041570 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.078155 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c
915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.125146 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z 
is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.161187 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.201335 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.240699 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.277970 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.322214 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.359261 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.404320 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.458447 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.483123 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.522100 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.563998 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.603267 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:42 crc kubenswrapper[5065]: I1008 13:18:42.642785 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:42Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.043378 5065 generic.go:334] "Generic (PLEG): container finished" podID="21825a9e-72d6-4850-af25-cafacf1ffff4" containerID="477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74" exitCode=0 Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.043479 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" event={"ID":"21825a9e-72d6-4850-af25-cafacf1ffff4","Type":"ContainerDied","Data":"477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74"} Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.047966 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerStarted","Data":"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"} Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.065622 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.089620 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z 
is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.104799 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.119709 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.134788 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.145642 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.161475 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.172512 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.184004 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.202552 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09
a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.214142 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.224462 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.236072 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 
13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.245474 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.253151 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.605142 5065 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.607317 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.607390 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.607408 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.607572 5065 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.615748 5065 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.616029 5065 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.617256 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.617308 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.617330 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.617352 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.617369 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:43Z","lastTransitionTime":"2025-10-08T13:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:43 crc kubenswrapper[5065]: E1008 13:18:43.635084 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 
2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.639380 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.639431 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.639444 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.639464 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.639477 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:43Z","lastTransitionTime":"2025-10-08T13:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:43 crc kubenswrapper[5065]: E1008 13:18:43.659619 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 
2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.670756 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.670823 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.670848 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.670878 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.670896 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:43Z","lastTransitionTime":"2025-10-08T13:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:43 crc kubenswrapper[5065]: E1008 13:18:43.687981 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 
2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.692753 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.692804 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.692818 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.692836 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.692849 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:43Z","lastTransitionTime":"2025-10-08T13:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:43 crc kubenswrapper[5065]: E1008 13:18:43.706070 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 
2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.709715 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.709747 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.709757 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.709772 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.709786 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:43Z","lastTransitionTime":"2025-10-08T13:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:43 crc kubenswrapper[5065]: E1008 13:18:43.727601 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:43Z is after 
2025-08-24T17:21:41Z" Oct 08 13:18:43 crc kubenswrapper[5065]: E1008 13:18:43.727710 5065 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.729092 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.729128 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.729141 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.729159 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.729170 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:43Z","lastTransitionTime":"2025-10-08T13:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.831784 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.831828 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.831839 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.831853 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.831865 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:43Z","lastTransitionTime":"2025-10-08T13:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.872845 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.872966 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:43 crc kubenswrapper[5065]: E1008 13:18:43.873083 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.873258 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:43 crc kubenswrapper[5065]: E1008 13:18:43.873318 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:18:43 crc kubenswrapper[5065]: E1008 13:18:43.873685 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.933809 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.933851 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.933859 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.933882 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:43 crc kubenswrapper[5065]: I1008 13:18:43.933894 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:43Z","lastTransitionTime":"2025-10-08T13:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.035736 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.035987 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.036088 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.036164 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.036233 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:44Z","lastTransitionTime":"2025-10-08T13:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.055637 5065 generic.go:334] "Generic (PLEG): container finished" podID="21825a9e-72d6-4850-af25-cafacf1ffff4" containerID="227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b" exitCode=0 Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.055700 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" event={"ID":"21825a9e-72d6-4850-af25-cafacf1ffff4","Type":"ContainerDied","Data":"227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b"} Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.068267 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.079268 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.089644 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.112167 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z 
is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.130061 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.139491 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.139545 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.139561 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.139582 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.139599 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:44Z","lastTransitionTime":"2025-10-08T13:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.157215 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.172609 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.182465 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.194973 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.211007 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.224847 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.242845 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.242895 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.242907 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.242925 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.242936 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:44Z","lastTransitionTime":"2025-10-08T13:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.247863 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.262761 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.272645 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.285326 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:44Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.345430 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.345468 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.345477 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.345493 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.345503 5065 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:44Z","lastTransitionTime":"2025-10-08T13:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.448046 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.448105 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.448126 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.448151 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.448173 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:44Z","lastTransitionTime":"2025-10-08T13:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.550973 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.551017 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.551030 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.551048 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.551060 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:44Z","lastTransitionTime":"2025-10-08T13:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.653641 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.653710 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.653733 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.653763 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.653787 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:44Z","lastTransitionTime":"2025-10-08T13:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.756964 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.757012 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.757021 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.757036 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.757045 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:44Z","lastTransitionTime":"2025-10-08T13:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.859710 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.859752 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.859760 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.859778 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.859788 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:44Z","lastTransitionTime":"2025-10-08T13:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.961710 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.961745 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.961754 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.961768 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:44 crc kubenswrapper[5065]: I1008 13:18:44.961777 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:44Z","lastTransitionTime":"2025-10-08T13:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.063316 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerStarted","Data":"2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0081889b3561eb58ccf60376f"} Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.063344 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.063375 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.063384 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.063400 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.063462 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:45Z","lastTransitionTime":"2025-10-08T13:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.063698 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.063713 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.068589 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" event={"ID":"21825a9e-72d6-4850-af25-cafacf1ffff4","Type":"ContainerStarted","Data":"d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b"} Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.074914 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.089854 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.090684 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.092211 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 
08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.105745 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.118809 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.134487 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.154378 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0
081889b3561eb58ccf60376f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.166494 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.166530 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.166540 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.166553 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.166562 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:45Z","lastTransitionTime":"2025-10-08T13:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.168133 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.183392 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\
":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.204189 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.216303 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.231494 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.245562 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.261805 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.269651 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.269693 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.269701 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.269715 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.269724 5065 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:45Z","lastTransitionTime":"2025-10-08T13:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.275116 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.283411 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.296290 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.308107 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.319500 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.327866 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.344597 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:1
8:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.363203 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.372519 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.372559 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.372570 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.372589 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.372604 5065 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:45Z","lastTransitionTime":"2025-10-08T13:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.376004 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.387838 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.406633 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.431053 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0
081889b3561eb58ccf60376f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.447408 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.465884 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.474378 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.474449 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.474466 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.474487 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.474502 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:45Z","lastTransitionTime":"2025-10-08T13:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.479477 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.489975 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.501491 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.577076 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.577115 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.577128 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.577143 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.577154 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:45Z","lastTransitionTime":"2025-10-08T13:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.629844 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.629966 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.629996 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.630020 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.630040 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.630141 5065 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.630170 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.630192 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.630203 5065 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.630163 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-10-08 13:18:53.630122382 +0000 UTC m=+35.407504189 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.630251 5065 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.630269 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:53.630251836 +0000 UTC m=+35.407633593 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.630386 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:53.630360589 +0000 UTC m=+35.407742426 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.630410 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:53.63040206 +0000 UTC m=+35.407783947 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.630455 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.630506 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.630525 5065 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.630623 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:53.630600165 +0000 UTC m=+35.407981922 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.680155 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.680208 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.680221 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.680240 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.680251 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:45Z","lastTransitionTime":"2025-10-08T13:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.783043 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.783125 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.783150 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.783180 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.783202 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:45Z","lastTransitionTime":"2025-10-08T13:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.873614 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.873616 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.873809 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.873926 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.873617 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:45 crc kubenswrapper[5065]: E1008 13:18:45.874042 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.885801 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.885885 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.885928 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.885963 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.885989 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:45Z","lastTransitionTime":"2025-10-08T13:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.989054 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.989104 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.989115 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.989131 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:45 crc kubenswrapper[5065]: I1008 13:18:45.989143 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:45Z","lastTransitionTime":"2025-10-08T13:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.071027 5065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.091125 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.091158 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.091166 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.091179 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.091190 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:46Z","lastTransitionTime":"2025-10-08T13:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.194584 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.194623 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.194635 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.194678 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.194691 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:46Z","lastTransitionTime":"2025-10-08T13:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.297205 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.297251 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.297262 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.297278 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.297293 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:46Z","lastTransitionTime":"2025-10-08T13:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.400174 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.400218 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.400231 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.400251 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.400265 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:46Z","lastTransitionTime":"2025-10-08T13:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.502599 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.502642 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.502653 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.502685 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.502697 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:46Z","lastTransitionTime":"2025-10-08T13:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.604970 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.605008 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.605018 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.605035 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.605044 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:46Z","lastTransitionTime":"2025-10-08T13:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.708552 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.708618 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.708628 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.708647 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.708657 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:46Z","lastTransitionTime":"2025-10-08T13:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.811588 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.811627 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.811642 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.811662 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.811675 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:46Z","lastTransitionTime":"2025-10-08T13:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.914349 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.914394 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.914443 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.914493 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:46 crc kubenswrapper[5065]: I1008 13:18:46.914511 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:46Z","lastTransitionTime":"2025-10-08T13:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.016691 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.016733 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.016742 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.016757 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.016767 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:47Z","lastTransitionTime":"2025-10-08T13:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.074641 5065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.119891 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.119943 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.119953 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.119968 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.119980 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:47Z","lastTransitionTime":"2025-10-08T13:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.223486 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.223536 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.223545 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.223564 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.223573 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:47Z","lastTransitionTime":"2025-10-08T13:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.327336 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.327411 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.327715 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.327747 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.327770 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:47Z","lastTransitionTime":"2025-10-08T13:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.430351 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.430389 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.430399 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.430431 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.430444 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:47Z","lastTransitionTime":"2025-10-08T13:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.533129 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.533223 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.533246 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.533277 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.533298 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:47Z","lastTransitionTime":"2025-10-08T13:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.635944 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.635988 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.636000 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.636016 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.636027 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:47Z","lastTransitionTime":"2025-10-08T13:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.738951 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.739004 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.739017 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.739038 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.739053 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:47Z","lastTransitionTime":"2025-10-08T13:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.841344 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.841445 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.841460 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.841478 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.841488 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:47Z","lastTransitionTime":"2025-10-08T13:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.872982 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.873042 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.873148 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 13:18:47 crc kubenswrapper[5065]: E1008 13:18:47.873298 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 13:18:47 crc kubenswrapper[5065]: E1008 13:18:47.873443 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 13:18:47 crc kubenswrapper[5065]: E1008 13:18:47.873527 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.944141 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.944182 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.944190 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.944206 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:47 crc kubenswrapper[5065]: I1008 13:18:47.944215 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:47Z","lastTransitionTime":"2025-10-08T13:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.046722 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.046774 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.046787 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.046805 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.046814 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:48Z","lastTransitionTime":"2025-10-08T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.150262 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.150317 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.150341 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.150371 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.150392 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:48Z","lastTransitionTime":"2025-10-08T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.254163 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.254530 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.254541 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.254561 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.254572 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:48Z","lastTransitionTime":"2025-10-08T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.357133 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.357177 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.357185 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.357200 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.357208 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:48Z","lastTransitionTime":"2025-10-08T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.459718 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.459784 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.459800 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.459824 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.459840 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:48Z","lastTransitionTime":"2025-10-08T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.562102 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.562153 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.562173 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.562190 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.562202 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:48Z","lastTransitionTime":"2025-10-08T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.665137 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.665212 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.665233 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.665262 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.665283 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:48Z","lastTransitionTime":"2025-10-08T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.768226 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.768288 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.768310 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.768340 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.768362 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:48Z","lastTransitionTime":"2025-10-08T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.871065 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.871175 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.871194 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.871221 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.871240 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:48Z","lastTransitionTime":"2025-10-08T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.893030 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:48Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.912385 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:48Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.935561 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:48Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.950315 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:48Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.964771 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:48Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.974754 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.974786 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.974794 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.974810 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.974861 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:48Z","lastTransitionTime":"2025-10-08T13:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.980315 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:48Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:48 crc kubenswrapper[5065]: I1008 13:18:48.992285 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:48Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.004834 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.024175 5065 status_manager.go:875] "Failed to
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0081889b3561eb58ccf60376f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.036691 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.051525 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.064226 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.077638 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.078082 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.078122 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.078132 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.078151 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.078161 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:49Z","lastTransitionTime":"2025-10-08T13:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.083073 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/0.log" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.086784 5065 generic.go:334] "Generic (PLEG): container finished" podID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerID="2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0081889b3561eb58ccf60376f" exitCode=1 Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.086856 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0081889b3561eb58ccf60376f"} Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.087889 5065 scope.go:117] "RemoveContainer" containerID="2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0081889b3561eb58ccf60376f" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.090538 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.105682 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.123277 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.135768 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.150871 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.171256 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09
a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.181219 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.181257 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.181271 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.181291 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.181307 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:49Z","lastTransitionTime":"2025-10-08T13:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.188629 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.201962 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.217035 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.229156 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.242056 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.254381 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.272750 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0
081889b3561eb58ccf60376f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0081889b3561eb58ccf60376f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:48Z\\\",\\\"message\\\":\\\".224510 6403 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1008 13:18:48.224538 6403 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 13:18:48.224548 6403 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 13:18:48.224903 6403 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 13:18:48.224921 6403 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 13:18:48.224943 6403 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:48.224948 6403 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:48.224960 6403 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 13:18:48.224976 6403 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1008 13:18:48.224977 6403 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 13:18:48.224981 6403 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:48.224992 6403 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 13:18:48.224996 6403 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:48.224994 6403 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1008 13:18:48.225011 6403 factory.go:656] Stopping watch factory\\\\nI1008 13:18:48.225025 6403 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.283383 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.283403 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.283427 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.283443 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.283454 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:49Z","lastTransitionTime":"2025-10-08T13:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.283569 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.299532 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.312764 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.330728 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.385595 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.385628 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.385636 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.385650 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.385658 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:49Z","lastTransitionTime":"2025-10-08T13:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.490325 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.490459 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.490494 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.490536 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.490578 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:49Z","lastTransitionTime":"2025-10-08T13:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.593743 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.593786 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.593794 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.593809 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.593817 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:49Z","lastTransitionTime":"2025-10-08T13:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.696282 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.696331 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.696346 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.696368 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.696386 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:49Z","lastTransitionTime":"2025-10-08T13:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.798128 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.798177 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.798186 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.798199 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.798207 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:49Z","lastTransitionTime":"2025-10-08T13:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.873552 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 13:18:49 crc kubenswrapper[5065]: E1008 13:18:49.873927 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.873629 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 13:18:49 crc kubenswrapper[5065]: E1008 13:18:49.874180 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.873626 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 13:18:49 crc kubenswrapper[5065]: E1008 13:18:49.874393 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.929220 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.929250 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.929259 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.929272 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:49 crc kubenswrapper[5065]: I1008 13:18:49.929281 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:49Z","lastTransitionTime":"2025-10-08T13:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.032887 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.032939 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.032950 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.032967 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.032980 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:50Z","lastTransitionTime":"2025-10-08T13:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.091742 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/0.log"
Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.093927 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerStarted","Data":"c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992"}
Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.094069 5065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.107542 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.115834 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.130253 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.135363 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.135462 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.135487 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.135513 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.135531 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:50Z","lastTransitionTime":"2025-10-08T13:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.146364 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.167525 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0081889b3561eb58ccf60376f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:48Z\\\",\\\"message\\\":\\\".224510 6403 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1008 13:18:48.224538 6403 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 13:18:48.224548 6403 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 13:18:48.224903 6403 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 13:18:48.224921 6403 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 13:18:48.224943 6403 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:48.224948 6403 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:48.224960 6403 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 13:18:48.224976 6403 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1008 13:18:48.224977 6403 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 13:18:48.224981 6403 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:48.224992 6403 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 13:18:48.224996 6403 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:48.224994 6403 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1008 13:18:48.225011 6403 factory.go:656] Stopping watch factory\\\\nI1008 13:18:48.225025 6403 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.181864 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.198538 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.220516 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-0
8T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4
831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.232085 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.238742 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.238791 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.238805 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.238827 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.238846 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:50Z","lastTransitionTime":"2025-10-08T13:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.244695 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.255264 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.266394 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.277088 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.287644 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.297956 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.340849 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.340882 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.340890 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.340902 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.340912 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:50Z","lastTransitionTime":"2025-10-08T13:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.417659 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8"] Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.418234 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:50 crc kubenswrapper[5065]: W1008 13:18:50.420007 5065 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: secrets "ovn-kubernetes-control-plane-dockercfg-gs7dd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Oct 08 13:18:50 crc kubenswrapper[5065]: E1008 13:18:50.420080 5065 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-control-plane-dockercfg-gs7dd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 13:18:50 crc kubenswrapper[5065]: W1008 13:18:50.420167 5065 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: secrets "ovn-control-plane-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Oct 08 13:18:50 crc kubenswrapper[5065]: E1008 13:18:50.420190 5065 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-control-plane-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.443334 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.443364 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.443377 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.443394 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.443407 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:50Z","lastTransitionTime":"2025-10-08T13:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.472482 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.485171 5065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7bb62c5d-316d-4a3c-95ff-7b1de710d481-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mzjf8\" (UID: \"7bb62c5d-316d-4a3c-95ff-7b1de710d481\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.485233 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7bb62c5d-316d-4a3c-95ff-7b1de710d481-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mzjf8\" (UID: \"7bb62c5d-316d-4a3c-95ff-7b1de710d481\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.485273 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wnhx\" (UniqueName: \"kubernetes.io/projected/7bb62c5d-316d-4a3c-95ff-7b1de710d481-kube-api-access-9wnhx\") pod \"ovnkube-control-plane-749d76644c-mzjf8\" (UID: \"7bb62c5d-316d-4a3c-95ff-7b1de710d481\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.485297 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7bb62c5d-316d-4a3c-95ff-7b1de710d481-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mzjf8\" (UID: \"7bb62c5d-316d-4a3c-95ff-7b1de710d481\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.486889 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.498102 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.519250 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0081889b3561eb58ccf60376f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:48Z\\\",\\\"message\\\":\\\".224510 6403 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1008 13:18:48.224538 6403 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 13:18:48.224548 6403 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 13:18:48.224903 6403 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 13:18:48.224921 6403 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 13:18:48.224943 6403 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:48.224948 6403 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:48.224960 6403 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 13:18:48.224976 6403 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1008 13:18:48.224977 6403 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 13:18:48.224981 6403 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:48.224992 6403 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 13:18:48.224996 6403 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:48.224994 6403 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1008 13:18:48.225011 6403 factory.go:656] Stopping watch factory\\\\nI1008 13:18:48.225025 6403 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.531805 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.545930 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.545998 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.546021 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.546053 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.546076 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:50Z","lastTransitionTime":"2025-10-08T13:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.548758 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:
18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.561262 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.572936 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.582794 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.585822 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7bb62c5d-316d-4a3c-95ff-7b1de710d481-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mzjf8\" (UID: \"7bb62c5d-316d-4a3c-95ff-7b1de710d481\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.585928 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7bb62c5d-316d-4a3c-95ff-7b1de710d481-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mzjf8\" (UID: \"7bb62c5d-316d-4a3c-95ff-7b1de710d481\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.585985 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wnhx\" (UniqueName: 
\"kubernetes.io/projected/7bb62c5d-316d-4a3c-95ff-7b1de710d481-kube-api-access-9wnhx\") pod \"ovnkube-control-plane-749d76644c-mzjf8\" (UID: \"7bb62c5d-316d-4a3c-95ff-7b1de710d481\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.586023 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7bb62c5d-316d-4a3c-95ff-7b1de710d481-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mzjf8\" (UID: \"7bb62c5d-316d-4a3c-95ff-7b1de710d481\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.586814 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7bb62c5d-316d-4a3c-95ff-7b1de710d481-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mzjf8\" (UID: \"7bb62c5d-316d-4a3c-95ff-7b1de710d481\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.586879 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7bb62c5d-316d-4a3c-95ff-7b1de710d481-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mzjf8\" (UID: \"7bb62c5d-316d-4a3c-95ff-7b1de710d481\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.597676 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.610450 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wnhx\" (UniqueName: \"kubernetes.io/projected/7bb62c5d-316d-4a3c-95ff-7b1de710d481-kube-api-access-9wnhx\") pod \"ovnkube-control-plane-749d76644c-mzjf8\" (UID: \"7bb62c5d-316d-4a3c-95ff-7b1de710d481\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.615548 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.631216 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.648928 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.648985 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.648999 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.649019 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.649034 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:50Z","lastTransitionTime":"2025-10-08T13:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.654139 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:50Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.751206 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.751260 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.751271 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.751287 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.751297 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:50Z","lastTransitionTime":"2025-10-08T13:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.853533 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.853600 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.853617 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.853644 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.853661 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:50Z","lastTransitionTime":"2025-10-08T13:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.957159 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.957197 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.957206 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.957223 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:50 crc kubenswrapper[5065]: I1008 13:18:50.957234 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:50Z","lastTransitionTime":"2025-10-08T13:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.059937 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.060028 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.060074 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.060093 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.060104 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:51Z","lastTransitionTime":"2025-10-08T13:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.097721 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/1.log" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.098598 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/0.log" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.101903 5065 generic.go:334] "Generic (PLEG): container finished" podID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerID="c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992" exitCode=1 Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.101970 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992"} Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.102160 5065 scope.go:117] "RemoveContainer" containerID="2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0081889b3561eb58ccf60376f" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.103220 5065 scope.go:117] "RemoveContainer" containerID="c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992" Oct 08 13:18:51 crc kubenswrapper[5065]: E1008 13:18:51.103478 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.115291 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.126542 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.138137 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.161500 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307
cb190d6b701aca6a0979b992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0081889b3561eb58ccf60376f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:48Z\\\",\\\"message\\\":\\\".224510 6403 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1008 13:18:48.224538 6403 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 13:18:48.224548 6403 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 13:18:48.224903 6403 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 13:18:48.224921 6403 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 13:18:48.224943 6403 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:48.224948 6403 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:48.224960 6403 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 13:18:48.224976 6403 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1008 13:18:48.224977 6403 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 13:18:48.224981 6403 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:48.224992 6403 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 13:18:48.224996 6403 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:48.224994 6403 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1008 13:18:48.225011 6403 factory.go:656] Stopping watch factory\\\\nI1008 13:18:48.225025 6403 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\":default/a8519615025667110816) with []\\\\nI1008 13:18:50.005859 6545 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1008 13:18:50.005950 6545 factory.go:1336] Added *v1.Node event handler 7\\\\nI1008 13:18:50.006003 6545 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:50.006017 6545 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006020 6545 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:50.006069 6545 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 13:18:50.006122 6545 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:50.006158 6545 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:50.006174 6545 factory.go:656] Stopping watch factory\\\\nI1008 13:18:50.006203 6545 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006559 6545 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1008 13:18:50.006673 6545 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 13:18:50.006729 6545 ovnkube.go:599] 
Stopped ovnkube\\\\nI1008 13:18:50.006776 6545 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:18:50.006947 6545 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.162710 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.162749 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.162757 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.162774 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.162783 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:51Z","lastTransitionTime":"2025-10-08T13:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.178002 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.193988 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.207770 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.221430 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.231821 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.246330 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.261155 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.264862 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.264905 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.264919 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.264940 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.264955 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:51Z","lastTransitionTime":"2025-10-08T13:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.273621 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.291790 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09
a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.303879 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.316151 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.329738 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.367095 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.367423 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.367506 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.367601 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.367686 5065 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:51Z","lastTransitionTime":"2025-10-08T13:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.469958 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.469992 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.470000 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.470013 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.470021 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:51Z","lastTransitionTime":"2025-10-08T13:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.517348 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-6nwh2"] Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.517783 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:18:51 crc kubenswrapper[5065]: E1008 13:18:51.517841 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.527668 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.537988 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.551435 5065 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c
857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.560955 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.572648 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:51 crc 
kubenswrapper[5065]: I1008 13:18:51.572682 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.572692 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.572711 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.572722 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:51Z","lastTransitionTime":"2025-10-08T13:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.576839 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: E1008 13:18:51.586736 5065 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-control-plane-metrics-cert: failed to sync secret cache: timed out waiting for the condition Oct 08 
13:18:51 crc kubenswrapper[5065]: E1008 13:18:51.586801 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bb62c5d-316d-4a3c-95ff-7b1de710d481-ovn-control-plane-metrics-cert podName:7bb62c5d-316d-4a3c-95ff-7b1de710d481 nodeName:}" failed. No retries permitted until 2025-10-08 13:18:52.086783148 +0000 UTC m=+33.864164905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" (UniqueName: "kubernetes.io/secret/7bb62c5d-316d-4a3c-95ff-7b1de710d481-ovn-control-plane-metrics-cert") pod "ovnkube-control-plane-749d76644c-mzjf8" (UID: "7bb62c5d-316d-4a3c-95ff-7b1de710d481") : failed to sync secret cache: timed out waiting for the condition Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.586803 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.595998 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.596071 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvfvc\" (UniqueName: \"kubernetes.io/projected/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-kube-api-access-gvfvc\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.597754 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.611534 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.627563 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0081889b3561eb58ccf60376f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:48Z\\\",\\\"message\\\":\\\".224510 6403 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1008 13:18:48.224538 6403 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 13:18:48.224548 6403 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 13:18:48.224903 6403 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 13:18:48.224921 6403 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 13:18:48.224943 6403 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:48.224948 6403 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:48.224960 6403 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 13:18:48.224976 6403 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1008 13:18:48.224977 6403 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 13:18:48.224981 6403 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:48.224992 6403 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 13:18:48.224996 6403 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:48.224994 6403 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1008 13:18:48.225011 6403 factory.go:656] Stopping watch factory\\\\nI1008 13:18:48.225025 6403 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\":default/a8519615025667110816) with []\\\\nI1008 13:18:50.005859 6545 address_set.go:302] 
New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1008 13:18:50.005950 6545 factory.go:1336] Added *v1.Node event handler 7\\\\nI1008 13:18:50.006003 6545 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:50.006017 6545 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006020 6545 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:50.006069 6545 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 13:18:50.006122 6545 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:50.006158 6545 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:50.006174 6545 factory.go:656] Stopping watch factory\\\\nI1008 13:18:50.006203 6545 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006559 6545 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1008 13:18:50.006673 6545 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 13:18:50.006729 6545 ovnkube.go:599] Stopped ovnkube\\\\nI1008 13:18:50.006776 6545 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:18:50.006947 6545 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.638798 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.640504 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.658435 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.669535 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.674504 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.674533 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.674544 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.674560 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.674571 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:51Z","lastTransitionTime":"2025-10-08T13:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.682439 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.697141 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.697244 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvfvc\" (UniqueName: \"kubernetes.io/projected/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-kube-api-access-gvfvc\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:18:51 crc kubenswrapper[5065]: E1008 13:18:51.697563 5065 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:18:51 crc kubenswrapper[5065]: E1008 13:18:51.697622 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs podName:c8a38e7c-bbc4-4255-ab4e-a056eb0655be nodeName:}" failed. No retries permitted until 2025-10-08 13:18:52.197603516 +0000 UTC m=+33.974985273 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs") pod "network-metrics-daemon-6nwh2" (UID: "c8a38e7c-bbc4-4255-ab4e-a056eb0655be") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.698133 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.712062 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvfvc\" (UniqueName: \"kubernetes.io/projected/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-kube-api-access-gvfvc\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.711997 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.725021 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.735332 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:51Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.755818 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.776921 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.776959 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.776969 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.776982 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.776992 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:51Z","lastTransitionTime":"2025-10-08T13:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.873600 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.873662 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.873646 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:51 crc kubenswrapper[5065]: E1008 13:18:51.873776 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:18:51 crc kubenswrapper[5065]: E1008 13:18:51.873848 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:18:51 crc kubenswrapper[5065]: E1008 13:18:51.873904 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.879761 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.879818 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.879847 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.879872 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.879886 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:51Z","lastTransitionTime":"2025-10-08T13:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.981952 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.982001 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.982011 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.982026 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:51 crc kubenswrapper[5065]: I1008 13:18:51.982037 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:51Z","lastTransitionTime":"2025-10-08T13:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.085072 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.085110 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.085122 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.085140 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.085152 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:52Z","lastTransitionTime":"2025-10-08T13:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.101656 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7bb62c5d-316d-4a3c-95ff-7b1de710d481-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mzjf8\" (UID: \"7bb62c5d-316d-4a3c-95ff-7b1de710d481\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.108065 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7bb62c5d-316d-4a3c-95ff-7b1de710d481-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mzjf8\" (UID: \"7bb62c5d-316d-4a3c-95ff-7b1de710d481\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.108667 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/1.log" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.188457 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.188922 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.188952 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.188974 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.188987 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:52Z","lastTransitionTime":"2025-10-08T13:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.202207 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:18:52 crc kubenswrapper[5065]: E1008 13:18:52.202641 5065 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:18:52 crc kubenswrapper[5065]: E1008 13:18:52.202862 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs podName:c8a38e7c-bbc4-4255-ab4e-a056eb0655be nodeName:}" failed. No retries permitted until 2025-10-08 13:18:53.202836214 +0000 UTC m=+34.980217981 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs") pod "network-metrics-daemon-6nwh2" (UID: "c8a38e7c-bbc4-4255-ab4e-a056eb0655be") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.233208 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.291899 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.291932 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.291941 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.291954 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.291962 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:52Z","lastTransitionTime":"2025-10-08T13:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.396554 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.396586 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.396596 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.396608 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.396618 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:52Z","lastTransitionTime":"2025-10-08T13:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.498590 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.498624 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.498635 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.498651 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.498662 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:52Z","lastTransitionTime":"2025-10-08T13:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.601577 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.601626 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.601641 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.601661 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.601676 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:52Z","lastTransitionTime":"2025-10-08T13:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.704623 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.704660 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.704668 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.704685 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.704695 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:52Z","lastTransitionTime":"2025-10-08T13:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.807070 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.807106 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.807119 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.807138 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.807150 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:52Z","lastTransitionTime":"2025-10-08T13:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.873150 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:18:52 crc kubenswrapper[5065]: E1008 13:18:52.873325 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.909544 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.909583 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.909593 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.909609 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:52 crc kubenswrapper[5065]: I1008 13:18:52.909620 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:52Z","lastTransitionTime":"2025-10-08T13:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.012364 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.012450 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.012464 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.012495 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.012510 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:53Z","lastTransitionTime":"2025-10-08T13:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.115200 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.115244 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.115255 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.115272 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.115283 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:53Z","lastTransitionTime":"2025-10-08T13:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.118218 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" event={"ID":"7bb62c5d-316d-4a3c-95ff-7b1de710d481","Type":"ContainerStarted","Data":"6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.118289 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" event={"ID":"7bb62c5d-316d-4a3c-95ff-7b1de710d481","Type":"ContainerStarted","Data":"0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.118308 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" event={"ID":"7bb62c5d-316d-4a3c-95ff-7b1de710d481","Type":"ContainerStarted","Data":"63e4f264525880ac3a8686a18103fefc3263666ae5cd48f7a4eb9000d6e022cc"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.140265 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.157752 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.174328 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.191493 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.204155 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.211047 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2" 
Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.211487 5065 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.211603 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs podName:c8a38e7c-bbc4-4255-ab4e-a056eb0655be nodeName:}" failed. No retries permitted until 2025-10-08 13:18:55.211574774 +0000 UTC m=+36.988956541 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs") pod "network-metrics-daemon-6nwh2" (UID: "c8a38e7c-bbc4-4255-ab4e-a056eb0655be") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.217900 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.217965 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.217992 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.218029 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.218052 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:53Z","lastTransitionTime":"2025-10-08T13:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.221316 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.235400 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.254439 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0081889b3561eb58ccf60376f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:48Z\\\",\\\"message\\\":\\\".224510 6403 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1008 13:18:48.224538 6403 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 13:18:48.224548 6403 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 13:18:48.224903 6403 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 13:18:48.224921 6403 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 13:18:48.224943 6403 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:48.224948 6403 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:48.224960 6403 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 13:18:48.224976 6403 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1008 13:18:48.224977 6403 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 13:18:48.224981 6403 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:48.224992 6403 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 13:18:48.224996 6403 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:48.224994 6403 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1008 13:18:48.225011 6403 factory.go:656] Stopping watch factory\\\\nI1008 13:18:48.225025 6403 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\":default/a8519615025667110816) with []\\\\nI1008 13:18:50.005859 6545 address_set.go:302] 
New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1008 13:18:50.005950 6545 factory.go:1336] Added *v1.Node event handler 7\\\\nI1008 13:18:50.006003 6545 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:50.006017 6545 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006020 6545 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:50.006069 6545 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 13:18:50.006122 6545 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:50.006158 6545 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:50.006174 6545 factory.go:656] Stopping watch factory\\\\nI1008 13:18:50.006203 6545 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006559 6545 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1008 13:18:50.006673 6545 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 13:18:50.006729 6545 ovnkube.go:599] Stopped ovnkube\\\\nI1008 13:18:50.006776 6545 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:18:50.006947 6545 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.273543 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.296769 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.311243 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.320506 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.320550 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.320561 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.320578 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.320589 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:53Z","lastTransitionTime":"2025-10-08T13:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.327268 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.338946 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.389033 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.401775 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.416116 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.423264 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.423306 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.423324 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.423344 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.423356 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:53Z","lastTransitionTime":"2025-10-08T13:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.440207 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.526949 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.527002 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.527014 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.527064 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.527079 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:53Z","lastTransitionTime":"2025-10-08T13:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.629637 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.629688 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.629701 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.629722 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.629744 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:53Z","lastTransitionTime":"2025-10-08T13:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.715575 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.715665 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.715694 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.715730 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.715751 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.715844 5065 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:19:09.715815076 +0000 UTC m=+51.493196833 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.715878 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.715893 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.715905 5065 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.715948 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 13:19:09.715932939 +0000 UTC m=+51.493314696 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.715958 5065 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.715994 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.716001 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:19:09.715990171 +0000 UTC m=+51.493371928 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.716003 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.716020 5065 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.716043 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 13:19:09.716037842 +0000 UTC m=+51.493419599 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.716072 5065 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.716092 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:19:09.716085903 +0000 UTC m=+51.493467660 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.732169 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.732220 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.732229 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.732245 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.732254 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:53Z","lastTransitionTime":"2025-10-08T13:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.835179 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.835234 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.835250 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.835266 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.835640 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:53Z","lastTransitionTime":"2025-10-08T13:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.873152 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.873213 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.873278 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.873324 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.873490 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.873651 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.938075 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.938122 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.938137 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.938158 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.938173 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:53Z","lastTransitionTime":"2025-10-08T13:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.983243 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.983303 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.983324 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.983353 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:53 crc kubenswrapper[5065]: I1008 13:18:53.983375 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:53Z","lastTransitionTime":"2025-10-08T13:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:53 crc kubenswrapper[5065]: E1008 13:18:53.999299 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:53Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.003864 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.003933 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.003944 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.003959 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.003968 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:54Z","lastTransitionTime":"2025-10-08T13:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:54 crc kubenswrapper[5065]: E1008 13:18:54.019234 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.023854 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.023892 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.023903 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.023922 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.023934 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:54Z","lastTransitionTime":"2025-10-08T13:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:54 crc kubenswrapper[5065]: E1008 13:18:54.038003 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.042873 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.042966 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.042982 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.043005 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.043022 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:54Z","lastTransitionTime":"2025-10-08T13:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:54 crc kubenswrapper[5065]: E1008 13:18:54.058144 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.062938 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.062992 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.063009 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.063034 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.063051 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:54Z","lastTransitionTime":"2025-10-08T13:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:54 crc kubenswrapper[5065]: E1008 13:18:54.078027 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: E1008 13:18:54.078192 5065 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.079708 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.079755 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.079770 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.079794 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.079811 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:54Z","lastTransitionTime":"2025-10-08T13:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.183345 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.183398 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.183428 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.183451 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.183467 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:54Z","lastTransitionTime":"2025-10-08T13:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.279878 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.286079 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.286127 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.286139 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.286158 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.286171 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:54Z","lastTransitionTime":"2025-10-08T13:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.296306 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.315079 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.328955 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.341332 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.358367 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.370081 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.379831 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.388848 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.388892 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.388905 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.388940 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.388950 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:54Z","lastTransitionTime":"2025-10-08T13:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.391389 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.402596 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.420009 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a2ccac3c9f5cf3c8c0365dc48d3055446e706f0081889b3561eb58ccf60376f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:48Z\\\",\\\"message\\\":\\\".224510 6403 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1008 13:18:48.224538 6403 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 13:18:48.224548 6403 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 13:18:48.224903 6403 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 13:18:48.224921 6403 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 13:18:48.224943 6403 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:48.224948 6403 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:48.224960 6403 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 13:18:48.224976 6403 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1008 13:18:48.224977 6403 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 13:18:48.224981 6403 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:48.224992 6403 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 13:18:48.224996 6403 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:48.224994 6403 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1008 13:18:48.225011 6403 factory.go:656] Stopping watch factory\\\\nI1008 13:18:48.225025 6403 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\":default/a8519615025667110816) with []\\\\nI1008 13:18:50.005859 6545 address_set.go:302] 
New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1008 13:18:50.005950 6545 factory.go:1336] Added *v1.Node event handler 7\\\\nI1008 13:18:50.006003 6545 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:50.006017 6545 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006020 6545 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:50.006069 6545 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 13:18:50.006122 6545 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:50.006158 6545 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:50.006174 6545 factory.go:656] Stopping watch factory\\\\nI1008 13:18:50.006203 6545 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006559 6545 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1008 13:18:50.006673 6545 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 13:18:50.006729 6545 ovnkube.go:599] Stopped ovnkube\\\\nI1008 13:18:50.006776 6545 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:18:50.006947 6545 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.435776 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.448644 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.460681 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.471811 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.485204 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.491773 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.491815 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.491829 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.491854 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.491866 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:54Z","lastTransitionTime":"2025-10-08T13:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.499947 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.514347 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:54Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.595185 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.595225 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.595235 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.595253 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.595263 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:54Z","lastTransitionTime":"2025-10-08T13:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.698442 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.698476 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.698486 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.698502 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.698532 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:54Z","lastTransitionTime":"2025-10-08T13:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.800904 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.800955 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.800977 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.801004 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.801026 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:54Z","lastTransitionTime":"2025-10-08T13:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.873045 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:18:54 crc kubenswrapper[5065]: E1008 13:18:54.873247 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.903241 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.903307 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.903345 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.903379 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:54 crc kubenswrapper[5065]: I1008 13:18:54.903400 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:54Z","lastTransitionTime":"2025-10-08T13:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.006255 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.006306 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.006319 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.006335 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.006346 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:55Z","lastTransitionTime":"2025-10-08T13:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.110064 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.110107 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.110118 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.110135 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.110148 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:55Z","lastTransitionTime":"2025-10-08T13:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.212345 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.212446 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.212459 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.212483 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.212495 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:55Z","lastTransitionTime":"2025-10-08T13:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.231202 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:18:55 crc kubenswrapper[5065]: E1008 13:18:55.231374 5065 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:18:55 crc kubenswrapper[5065]: E1008 13:18:55.231455 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs podName:c8a38e7c-bbc4-4255-ab4e-a056eb0655be nodeName:}" failed. No retries permitted until 2025-10-08 13:18:59.231434668 +0000 UTC m=+41.008816425 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs") pod "network-metrics-daemon-6nwh2" (UID: "c8a38e7c-bbc4-4255-ab4e-a056eb0655be") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.315824 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.315894 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.315907 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.315928 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.315941 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:55Z","lastTransitionTime":"2025-10-08T13:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.419058 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.419096 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.419104 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.419120 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.419129 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:55Z","lastTransitionTime":"2025-10-08T13:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.522164 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.522219 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.522235 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.522256 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.522271 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:55Z","lastTransitionTime":"2025-10-08T13:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.624475 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.624540 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.624551 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.624568 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.624580 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:55Z","lastTransitionTime":"2025-10-08T13:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.738772 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.738843 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.738854 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.739101 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.739136 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:55Z","lastTransitionTime":"2025-10-08T13:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.842307 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.842360 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.842376 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.842397 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.842433 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:55Z","lastTransitionTime":"2025-10-08T13:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.873282 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:55 crc kubenswrapper[5065]: E1008 13:18:55.873473 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.873307 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.873284 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:55 crc kubenswrapper[5065]: E1008 13:18:55.873555 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:18:55 crc kubenswrapper[5065]: E1008 13:18:55.873805 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.945154 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.945193 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.945203 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.945218 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:55 crc kubenswrapper[5065]: I1008 13:18:55.945230 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:55Z","lastTransitionTime":"2025-10-08T13:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.051012 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.051078 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.051094 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.051111 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.051121 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:56Z","lastTransitionTime":"2025-10-08T13:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.153303 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.153347 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.153357 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.153370 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.153378 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:56Z","lastTransitionTime":"2025-10-08T13:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.256256 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.256338 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.256357 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.256378 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.256393 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:56Z","lastTransitionTime":"2025-10-08T13:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.359279 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.359328 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.359340 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.359358 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.359370 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:56Z","lastTransitionTime":"2025-10-08T13:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.462708 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.462789 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.462814 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.462847 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.462869 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:56Z","lastTransitionTime":"2025-10-08T13:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.566209 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.566254 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.566265 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.566284 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.566296 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:56Z","lastTransitionTime":"2025-10-08T13:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.668900 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.668951 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.668962 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.668983 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.668995 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:56Z","lastTransitionTime":"2025-10-08T13:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.771692 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.771757 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.771818 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.771840 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.771856 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:56Z","lastTransitionTime":"2025-10-08T13:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.873220 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:18:56 crc kubenswrapper[5065]: E1008 13:18:56.873358 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.874824 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.874858 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.874866 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.874886 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.874905 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:56Z","lastTransitionTime":"2025-10-08T13:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.977676 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.977722 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.977734 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.977752 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:56 crc kubenswrapper[5065]: I1008 13:18:56.977763 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:56Z","lastTransitionTime":"2025-10-08T13:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.079735 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.079771 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.079780 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.079793 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.079802 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:57Z","lastTransitionTime":"2025-10-08T13:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.182733 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.182771 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.182779 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.182791 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.182800 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:57Z","lastTransitionTime":"2025-10-08T13:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.285496 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.285539 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.285549 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.285566 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.285575 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:57Z","lastTransitionTime":"2025-10-08T13:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.351281 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.352132 5065 scope.go:117] "RemoveContainer" containerID="c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992" Oct 08 13:18:57 crc kubenswrapper[5065]: E1008 13:18:57.352290 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.365408 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.378509 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.387926 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.387993 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.388004 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.388019 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.388029 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:57Z","lastTransitionTime":"2025-10-08T13:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
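
Every "Failed to update status for pod" record in this stretch fails identically: the patch goes through the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743, whose serving certificate expired on 2025-08-24, so TLS verification rejects it against the kubelet's 2025-10-08 clock, and no status patch can land until that certificate is rotated. A sketch of the NotBefore/NotAfter comparison behind the error text, mirroring the wording of Go's x509 failure but not the library's actual source; the certificate literal is fabricated from the dates in the log:

  package main

  import (
      "crypto/x509"
      "fmt"
      "time"
  )

  // checkValidity reproduces the validity-window rule that yields
  // "certificate has expired or is not yet valid" in the records above.
  func checkValidity(cert *x509.Certificate, now time.Time) error {
      if now.Before(cert.NotBefore) {
          return fmt.Errorf("x509: certificate has expired or is not yet valid: current time %s is before %s",
              now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
      }
      if now.After(cert.NotAfter) {
          return fmt.Errorf("x509: certificate has expired or is not yet valid: current time %s is after %s",
              now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
      }
      return nil
  }

  func main() {
      cert := &x509.Certificate{
          NotBefore: time.Date(2024, 8, 24, 17, 21, 41, 0, time.UTC), // assumed issue date
          NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // expiry from the log
      }
      now := time.Date(2025, 10, 8, 13, 18, 57, 0, time.UTC) // kubelet's clock in the log
      fmt.Println(checkValidity(cert, now))
  }
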
Has your network provider started?"} Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.389293 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.399935 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.412719 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.425098 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.438576 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.451666 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.473080 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\":default/a8519615025667110816) with []\\\\nI1008 13:18:50.005859 6545 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1008 13:18:50.005950 6545 factory.go:1336] Added *v1.Node event handler 7\\\\nI1008 13:18:50.006003 6545 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:50.006017 6545 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006020 6545 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:50.006069 6545 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 13:18:50.006122 6545 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:50.006158 6545 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:50.006174 6545 factory.go:656] Stopping watch factory\\\\nI1008 13:18:50.006203 6545 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006559 6545 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1008 13:18:50.006673 6545 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 13:18:50.006729 6545 ovnkube.go:599] Stopped ovnkube\\\\nI1008 13:18:50.006776 6545 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:18:50.006947 6545 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.487384 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.492643 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.492687 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.492699 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.492742 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.492756 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:57Z","lastTransitionTime":"2025-10-08T13:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.505958 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:
18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.516156 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.527661 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.543228 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.563249 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.584547 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.594951 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.595001 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.595012 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.595030 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.595042 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:57Z","lastTransitionTime":"2025-10-08T13:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.606684 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:57Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.697239 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.697281 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.697293 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.697309 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.697321 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:57Z","lastTransitionTime":"2025-10-08T13:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.799439 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.799509 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.799518 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.799532 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.799541 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:57Z","lastTransitionTime":"2025-10-08T13:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.873400 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:57 crc kubenswrapper[5065]: E1008 13:18:57.873588 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.873699 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.873712 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:57 crc kubenswrapper[5065]: E1008 13:18:57.873936 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:18:57 crc kubenswrapper[5065]: E1008 13:18:57.873978 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.901940 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.901992 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.902006 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.902025 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:57 crc kubenswrapper[5065]: I1008 13:18:57.902038 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:57Z","lastTransitionTime":"2025-10-08T13:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.005232 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.005304 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.005319 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.005346 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.005366 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:58Z","lastTransitionTime":"2025-10-08T13:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.108211 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.108262 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.108273 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.108290 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.108301 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:58Z","lastTransitionTime":"2025-10-08T13:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.211146 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.211188 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.211198 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.211213 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.211224 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:58Z","lastTransitionTime":"2025-10-08T13:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.314117 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.314179 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.314189 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.314206 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.314219 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:58Z","lastTransitionTime":"2025-10-08T13:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.416982 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.417081 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.417101 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.417129 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.417147 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:58Z","lastTransitionTime":"2025-10-08T13:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.520818 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.521319 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.521332 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.521352 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.521362 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:58Z","lastTransitionTime":"2025-10-08T13:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.624079 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.624160 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.624180 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.624210 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.624234 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:58Z","lastTransitionTime":"2025-10-08T13:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.727590 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.727633 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.727641 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.727662 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.727676 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:58Z","lastTransitionTime":"2025-10-08T13:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.829811 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.829865 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.829883 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.829906 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.829921 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:58Z","lastTransitionTime":"2025-10-08T13:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.872637 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2"
Oct 08 13:18:58 crc kubenswrapper[5065]: E1008 13:18:58.872886 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.885227 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:58Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.900247 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:58Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.913185 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:58Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.933692 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.933743 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.933757 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.933776 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.933789 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:58Z","lastTransitionTime":"2025-10-08T13:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.936472 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\":default/a8519615025667110816) with []\\\\nI1008 13:18:50.005859 6545 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1008 13:18:50.005950 6545 factory.go:1336] Added *v1.Node event handler 7\\\\nI1008 13:18:50.006003 6545 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:50.006017 6545 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006020 6545 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:50.006069 6545 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 13:18:50.006122 6545 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:50.006158 6545 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:50.006174 6545 factory.go:656] Stopping watch factory\\\\nI1008 13:18:50.006203 6545 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006559 6545 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1008 13:18:50.006673 6545 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 13:18:50.006729 6545 ovnkube.go:599] Stopped ovnkube\\\\nI1008 13:18:50.006776 6545 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:18:50.006947 6545 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:58Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.950814 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:58Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.963765 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:58Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.973859 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:58Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:58 crc kubenswrapper[5065]: I1008 13:18:58.986695 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:58Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.000749 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:58Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.014181 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:59Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.027487 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:59Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.035943 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.035993 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.036004 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.036023 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.036036 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:59Z","lastTransitionTime":"2025-10-08T13:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.053471 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:59Z is after 2025-08-24T17:21:41Z"
Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.067925 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.080668 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.090279 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.103598 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.114819 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:18:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.138658 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.138712 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.138723 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.138745 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.138757 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:59Z","lastTransitionTime":"2025-10-08T13:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.241760 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.241851 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.241869 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.241892 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.241908 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:59Z","lastTransitionTime":"2025-10-08T13:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.276646 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:18:59 crc kubenswrapper[5065]: E1008 13:18:59.276883 5065 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:18:59 crc kubenswrapper[5065]: E1008 13:18:59.276987 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs podName:c8a38e7c-bbc4-4255-ab4e-a056eb0655be nodeName:}" failed. No retries permitted until 2025-10-08 13:19:07.276958382 +0000 UTC m=+49.054340179 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs") pod "network-metrics-daemon-6nwh2" (UID: "c8a38e7c-bbc4-4255-ab4e-a056eb0655be") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.344751 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.344819 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.344839 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.344865 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.344882 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:59Z","lastTransitionTime":"2025-10-08T13:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.448361 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.448460 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.448484 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.448513 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.448540 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:59Z","lastTransitionTime":"2025-10-08T13:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.551564 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.551608 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.551618 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.551634 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.551645 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:59Z","lastTransitionTime":"2025-10-08T13:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.654477 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.654530 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.654548 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.654569 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.654584 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:59Z","lastTransitionTime":"2025-10-08T13:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.757375 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.757453 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.757465 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.757481 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.757492 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:59Z","lastTransitionTime":"2025-10-08T13:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.860329 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.860406 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.860462 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.860492 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.860512 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:59Z","lastTransitionTime":"2025-10-08T13:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.872526 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.872548 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.872573 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:18:59 crc kubenswrapper[5065]: E1008 13:18:59.872655 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:18:59 crc kubenswrapper[5065]: E1008 13:18:59.872809 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:18:59 crc kubenswrapper[5065]: E1008 13:18:59.872937 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.963297 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.963377 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.963401 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.963469 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:18:59 crc kubenswrapper[5065]: I1008 13:18:59.963493 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:18:59Z","lastTransitionTime":"2025-10-08T13:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.067162 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.067238 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.067262 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.067292 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.067315 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:00Z","lastTransitionTime":"2025-10-08T13:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.170541 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.170580 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.170592 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.170608 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.170619 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:00Z","lastTransitionTime":"2025-10-08T13:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.273573 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.273645 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.273655 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.273671 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.273680 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:00Z","lastTransitionTime":"2025-10-08T13:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.376884 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.376964 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.376991 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.377014 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.377028 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:00Z","lastTransitionTime":"2025-10-08T13:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.480365 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.480497 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.480521 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.480546 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.480563 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:00Z","lastTransitionTime":"2025-10-08T13:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.583991 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.584042 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.584053 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.584071 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.584083 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:00Z","lastTransitionTime":"2025-10-08T13:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.686433 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.686477 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.686488 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.686503 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.686513 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:00Z","lastTransitionTime":"2025-10-08T13:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.789821 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.789872 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.789881 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.789896 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.789906 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:00Z","lastTransitionTime":"2025-10-08T13:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.872770 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:00 crc kubenswrapper[5065]: E1008 13:19:00.872925 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.892090 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.892146 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.892161 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.892177 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.892189 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:00Z","lastTransitionTime":"2025-10-08T13:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.996196 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.996244 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.996253 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.996269 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:00 crc kubenswrapper[5065]: I1008 13:19:00.996279 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:00Z","lastTransitionTime":"2025-10-08T13:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.098780 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.098846 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.098864 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.098890 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.098910 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:01Z","lastTransitionTime":"2025-10-08T13:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.202346 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.202437 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.202456 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.202476 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.202487 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:01Z","lastTransitionTime":"2025-10-08T13:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.305980 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.306054 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.306074 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.306103 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.306121 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:01Z","lastTransitionTime":"2025-10-08T13:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.408661 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.408742 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.408764 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.408794 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.408816 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:01Z","lastTransitionTime":"2025-10-08T13:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.511148 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.511224 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.511242 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.511268 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.511284 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:01Z","lastTransitionTime":"2025-10-08T13:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.614969 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.615066 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.615095 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.615128 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.615145 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:01Z","lastTransitionTime":"2025-10-08T13:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.718443 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.718491 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.718501 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.718517 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.718528 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:01Z","lastTransitionTime":"2025-10-08T13:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.822007 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.822068 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.822085 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.822108 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.822126 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:01Z","lastTransitionTime":"2025-10-08T13:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.873182 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.873281 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 13:19:01 crc kubenswrapper[5065]: E1008 13:19:01.873319 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.873331 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 13:19:01 crc kubenswrapper[5065]: E1008 13:19:01.873580 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 13:19:01 crc kubenswrapper[5065]: E1008 13:19:01.873701 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.924449 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.924502 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.924513 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.924529 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:01 crc kubenswrapper[5065]: I1008 13:19:01.924540 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:01Z","lastTransitionTime":"2025-10-08T13:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.027451 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.027511 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.027522 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.027540 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.027552 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:02Z","lastTransitionTime":"2025-10-08T13:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.129791 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.129849 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.129867 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.129890 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.129907 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:02Z","lastTransitionTime":"2025-10-08T13:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.233659 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.233711 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.233723 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.233740 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.233751 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:02Z","lastTransitionTime":"2025-10-08T13:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.336063 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.336103 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.336114 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.336133 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.336146 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:02Z","lastTransitionTime":"2025-10-08T13:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.438399 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.438489 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.438505 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.438535 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.438558 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:02Z","lastTransitionTime":"2025-10-08T13:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.541644 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.541711 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.541723 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.541743 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.541755 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:02Z","lastTransitionTime":"2025-10-08T13:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.644722 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.644787 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.644810 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.644841 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.644866 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:02Z","lastTransitionTime":"2025-10-08T13:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.747534 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.747596 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.747614 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.747638 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.747659 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:02Z","lastTransitionTime":"2025-10-08T13:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.851156 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.851206 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.851217 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.851236 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.851248 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:02Z","lastTransitionTime":"2025-10-08T13:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.872885 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2"
Oct 08 13:19:02 crc kubenswrapper[5065]: E1008 13:19:02.873215 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be"
Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.954770 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.954868 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.954882 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.954909 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:02 crc kubenswrapper[5065]: I1008 13:19:02.954922 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:02Z","lastTransitionTime":"2025-10-08T13:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.058730 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.058811 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.058829 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.058860 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.058879 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:03Z","lastTransitionTime":"2025-10-08T13:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.161671 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.161737 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.161757 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.161781 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.161802 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:03Z","lastTransitionTime":"2025-10-08T13:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.265066 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.265203 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.265282 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.265313 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.265367 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:03Z","lastTransitionTime":"2025-10-08T13:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.368583 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.368909 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.369045 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.369156 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.369240 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:03Z","lastTransitionTime":"2025-10-08T13:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.473605 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.474002 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.474162 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.474336 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.474497 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:03Z","lastTransitionTime":"2025-10-08T13:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.577761 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.577813 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.577827 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.577849 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.577863 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:03Z","lastTransitionTime":"2025-10-08T13:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.681308 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.681363 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.681373 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.681395 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.681408 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:03Z","lastTransitionTime":"2025-10-08T13:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.785704 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.786297 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.786313 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.786334 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.786349 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:03Z","lastTransitionTime":"2025-10-08T13:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.872838 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.872849 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.872923 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 13:19:03 crc kubenswrapper[5065]: E1008 13:19:03.873310 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 13:19:03 crc kubenswrapper[5065]: E1008 13:19:03.873171 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 13:19:03 crc kubenswrapper[5065]: E1008 13:19:03.873454 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.888836 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.888877 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.888887 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.888904 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.888916 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:03Z","lastTransitionTime":"2025-10-08T13:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.991689 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.991765 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.991789 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.991819 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:03 crc kubenswrapper[5065]: I1008 13:19:03.991842 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:03Z","lastTransitionTime":"2025-10-08T13:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.095521 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.095602 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.095625 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.095654 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.095675 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.165400 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.165488 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.165509 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.165547 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.165568 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: E1008 13:19:04.184010 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:04Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.187891 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.187911 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.187919 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.187935 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.187947 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: E1008 13:19:04.199971 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:04Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.203060 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.203115 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.203126 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.203150 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.203165 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: E1008 13:19:04.216087 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:04Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.219572 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.219604 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.219611 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.219624 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.219634 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: E1008 13:19:04.232739 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:04Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.236210 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.236236 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.236244 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.236261 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.236269 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: E1008 13:19:04.249267 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:04Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:04 crc kubenswrapper[5065]: E1008 13:19:04.249451 5065 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.250799 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.250837 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.250846 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.250862 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.250873 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.353146 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.353194 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.353212 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.353290 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.353377 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.456214 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.456264 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.456279 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.456301 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.456316 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.559065 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.559135 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.559155 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.559183 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.559203 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.662145 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.662237 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.662253 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.662274 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.662288 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.765921 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.765978 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.765991 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.766012 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.766024 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.868508 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.868559 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.868575 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.868598 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.868612 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.873111 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:04 crc kubenswrapper[5065]: E1008 13:19:04.873352 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.971389 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.971455 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.971467 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.971484 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:04 crc kubenswrapper[5065]: I1008 13:19:04.971496 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:04Z","lastTransitionTime":"2025-10-08T13:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.074967 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.075400 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.075637 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.075839 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.076038 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:05Z","lastTransitionTime":"2025-10-08T13:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.179465 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.179495 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.179503 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.179518 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.179526 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:05Z","lastTransitionTime":"2025-10-08T13:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.282917 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.282952 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.282961 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.282977 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.282988 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:05Z","lastTransitionTime":"2025-10-08T13:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.386323 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.386452 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.386493 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.386525 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.386550 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:05Z","lastTransitionTime":"2025-10-08T13:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.489607 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.489951 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.490043 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.490128 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.490213 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:05Z","lastTransitionTime":"2025-10-08T13:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.593104 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.593381 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.593513 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.593581 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.593645 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:05Z","lastTransitionTime":"2025-10-08T13:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.696819 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.696864 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.696874 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.696890 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.696901 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:05Z","lastTransitionTime":"2025-10-08T13:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.799165 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.799205 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.799216 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.799232 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.799243 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:05Z","lastTransitionTime":"2025-10-08T13:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.873320 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.873361 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:05 crc kubenswrapper[5065]: E1008 13:19:05.873494 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:05 crc kubenswrapper[5065]: E1008 13:19:05.873637 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.873717 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:05 crc kubenswrapper[5065]: E1008 13:19:05.873892 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.902467 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.902509 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.902520 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.902536 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:05 crc kubenswrapper[5065]: I1008 13:19:05.902548 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:05Z","lastTransitionTime":"2025-10-08T13:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.005708 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.005782 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.005804 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.005834 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.005857 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:06Z","lastTransitionTime":"2025-10-08T13:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.108805 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.108849 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.108864 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.108883 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.108894 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:06Z","lastTransitionTime":"2025-10-08T13:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.210869 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.210899 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.210907 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.210922 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.210932 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:06Z","lastTransitionTime":"2025-10-08T13:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.313816 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.313865 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.313893 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.313915 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.313928 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:06Z","lastTransitionTime":"2025-10-08T13:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.415950 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.415981 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.415990 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.416004 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.416013 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:06Z","lastTransitionTime":"2025-10-08T13:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.518750 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.518803 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.518817 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.518836 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.518854 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:06Z","lastTransitionTime":"2025-10-08T13:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.621201 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.621263 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.621276 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.621296 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.621309 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:06Z","lastTransitionTime":"2025-10-08T13:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.724661 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.724714 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.724725 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.724767 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.724780 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:06Z","lastTransitionTime":"2025-10-08T13:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.827944 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.827990 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.828003 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.828021 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.828032 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:06Z","lastTransitionTime":"2025-10-08T13:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.873210 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:06 crc kubenswrapper[5065]: E1008 13:19:06.873402 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.930533 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.930592 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.930604 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.930622 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:06 crc kubenswrapper[5065]: I1008 13:19:06.930632 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:06Z","lastTransitionTime":"2025-10-08T13:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.033444 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.033498 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.033509 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.033526 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.033539 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:07Z","lastTransitionTime":"2025-10-08T13:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.135766 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.135815 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.135828 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.135845 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.135857 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:07Z","lastTransitionTime":"2025-10-08T13:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.237456 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.237497 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.237507 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.237524 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.237535 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:07Z","lastTransitionTime":"2025-10-08T13:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.339740 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.339806 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.339817 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.339839 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.339860 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:07Z","lastTransitionTime":"2025-10-08T13:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.370565 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:07 crc kubenswrapper[5065]: E1008 13:19:07.370864 5065 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:19:07 crc kubenswrapper[5065]: E1008 13:19:07.371003 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs podName:c8a38e7c-bbc4-4255-ab4e-a056eb0655be nodeName:}" failed. No retries permitted until 2025-10-08 13:19:23.370974561 +0000 UTC m=+65.148356508 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs") pod "network-metrics-daemon-6nwh2" (UID: "c8a38e7c-bbc4-4255-ab4e-a056eb0655be") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.442345 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.442405 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.442454 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.442475 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.442488 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:07Z","lastTransitionTime":"2025-10-08T13:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.546112 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.546193 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.546217 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.546248 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.546271 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:07Z","lastTransitionTime":"2025-10-08T13:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.649295 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.649502 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.649521 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.649542 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.649557 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:07Z","lastTransitionTime":"2025-10-08T13:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.752180 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.752223 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.752231 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.752247 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.752256 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:07Z","lastTransitionTime":"2025-10-08T13:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.854139 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.854174 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.854184 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.854200 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.854210 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:07Z","lastTransitionTime":"2025-10-08T13:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.872673 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.872672 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:07 crc kubenswrapper[5065]: E1008 13:19:07.872817 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:07 crc kubenswrapper[5065]: E1008 13:19:07.872852 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.872672 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:07 crc kubenswrapper[5065]: E1008 13:19:07.872911 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.956002 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.956040 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.956051 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.956065 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:07 crc kubenswrapper[5065]: I1008 13:19:07.956076 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:07Z","lastTransitionTime":"2025-10-08T13:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.058927 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.058981 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.058992 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.059011 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.059022 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:08Z","lastTransitionTime":"2025-10-08T13:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.161906 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.162034 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.162135 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.162188 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.162213 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:08Z","lastTransitionTime":"2025-10-08T13:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.286481 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.286546 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.286563 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.286588 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.286604 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:08Z","lastTransitionTime":"2025-10-08T13:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.389034 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.389108 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.389127 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.389150 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.389169 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:08Z","lastTransitionTime":"2025-10-08T13:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.492289 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.492351 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.492365 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.492382 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.492391 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:08Z","lastTransitionTime":"2025-10-08T13:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.595488 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.595576 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.595613 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.595646 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.595669 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:08Z","lastTransitionTime":"2025-10-08T13:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.698075 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.698119 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.698130 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.698148 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.698161 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:08Z","lastTransitionTime":"2025-10-08T13:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.800531 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.800578 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.800589 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.800609 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.800621 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:08Z","lastTransitionTime":"2025-10-08T13:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.873016 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:08 crc kubenswrapper[5065]: E1008 13:19:08.873224 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.873286 5065 scope.go:117] "RemoveContainer" containerID="c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.888950 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:08Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.904844 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.904896 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.904906 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.904921 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.904930 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:08Z","lastTransitionTime":"2025-10-08T13:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.912679 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:08Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.927885 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:08Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.940648 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:08Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.959466 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/op
enshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:08Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.971575 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:08Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.980999 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:08Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:08 crc kubenswrapper[5065]: I1008 13:19:08.993571 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:08Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.002759 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.008642 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.008674 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.008684 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.008701 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.008714 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:09Z","lastTransitionTime":"2025-10-08T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.020930 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307
cb190d6b701aca6a0979b992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\":default/a8519615025667110816) with []\\\\nI1008 13:18:50.005859 6545 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1008 13:18:50.005950 6545 factory.go:1336] Added *v1.Node event handler 7\\\\nI1008 13:18:50.006003 6545 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:50.006017 6545 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006020 6545 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:50.006069 6545 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 13:18:50.006122 6545 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:50.006158 6545 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:50.006174 6545 factory.go:656] Stopping watch factory\\\\nI1008 13:18:50.006203 6545 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006559 6545 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1008 13:18:50.006673 6545 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 13:18:50.006729 6545 ovnkube.go:599] Stopped ovnkube\\\\nI1008 13:18:50.006776 6545 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:18:50.006947 6545 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.033224 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.049039 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.058953 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 
13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.071290 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.080074 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.090716 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.101724 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.110677 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.110722 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.110737 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.110759 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.110776 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:09Z","lastTransitionTime":"2025-10-08T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.170739 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/1.log" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.173387 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerStarted","Data":"147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f"} Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.173860 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.187490 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.199997 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.212973 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.213012 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 
13:19:09.213025 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.213043 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.213057 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:09Z","lastTransitionTime":"2025-10-08T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.221886 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266
838359f6a0d016b07d2ba08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\":default/a8519615025667110816) with []\\\\nI1008 13:18:50.005859 6545 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1008 13:18:50.005950 6545 factory.go:1336] Added *v1.Node event handler 7\\\\nI1008 13:18:50.006003 6545 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:50.006017 6545 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006020 6545 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:50.006069 6545 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 13:18:50.006122 6545 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:50.006158 6545 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:50.006174 6545 factory.go:656] Stopping watch factory\\\\nI1008 13:18:50.006203 6545 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006559 6545 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1008 13:18:50.006673 6545 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 13:18:50.006729 6545 ovnkube.go:599] Stopped ovnkube\\\\nI1008 13:18:50.006776 6545 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:18:50.006947 6545 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.237321 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.254827 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.272694 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.285311 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.295386 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.308942 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.315599 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.315629 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.315638 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.315653 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.315664 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:09Z","lastTransitionTime":"2025-10-08T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.322748 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.334298 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.352258 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.363798 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.376581 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.389799 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.406528 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.416577 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:09Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.417982 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.418008 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.418019 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.418034 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.418044 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:09Z","lastTransitionTime":"2025-10-08T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.520239 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.520330 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.520353 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.520383 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.520405 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:09Z","lastTransitionTime":"2025-10-08T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.623206 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.623533 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.623609 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.623689 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.623754 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:09Z","lastTransitionTime":"2025-10-08T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.727034 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.727106 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.727132 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.727164 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.727187 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:09Z","lastTransitionTime":"2025-10-08T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.797514 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.797631 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:19:41.797614787 +0000 UTC m=+83.574996544 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.797662 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.797689 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.797718 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.797741 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.797844 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.797848 5065 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.797889 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.797896 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.797906 5065 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.797918 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:19:41.797899485 +0000 UTC m=+83.575281252 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.797942 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 13:19:41.797931686 +0000 UTC m=+83.575313453 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.797947 5065 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.797969 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:19:41.797961817 +0000 UTC m=+83.575343574 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.797858 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.797980 5065 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.797997 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 13:19:41.797992468 +0000 UTC m=+83.575374225 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.829363 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.829447 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.829463 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.829481 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.829494 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:09Z","lastTransitionTime":"2025-10-08T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.873568 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.873634 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.873773 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.874117 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.874306 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 13:19:09 crc kubenswrapper[5065]: E1008 13:19:09.874631 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.932466 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.932517 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.932531 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.932550 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:09 crc kubenswrapper[5065]: I1008 13:19:09.932563 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:09Z","lastTransitionTime":"2025-10-08T13:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.035867 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.035946 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.035972 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.036002 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.036025 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:10Z","lastTransitionTime":"2025-10-08T13:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.138330 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.138371 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.138385 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.138407 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.138444 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:10Z","lastTransitionTime":"2025-10-08T13:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.179585 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/2.log"
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.180592 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/1.log"
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.184363 5065 generic.go:334] "Generic (PLEG): container finished" podID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerID="147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f" exitCode=1
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.184410 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f"}
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.184489 5065 scope.go:117] "RemoveContainer" containerID="c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992"
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.185688 5065 scope.go:117] "RemoveContainer" containerID="147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f"
Oct 08 13:19:10 crc kubenswrapper[5065]: E1008 13:19:10.186143 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"
Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.210538 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.228393 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.238215 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.241444 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.241488 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.241498 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.241518 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.241531 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:10Z","lastTransitionTime":"2025-10-08T13:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.249558 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.261856 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.281267 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.291890 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 
13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.304559 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.315268 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.326132 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.335895 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.344180 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.344229 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.344241 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.344256 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.344268 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:10Z","lastTransitionTime":"2025-10-08T13:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.353052 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266
838359f6a0d016b07d2ba08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c3e97969ff195b1adcb26ebbed962de5826307cb190d6b701aca6a0979b992\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"message\\\":\\\":default/a8519615025667110816) with []\\\\nI1008 13:18:50.005859 6545 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1008 13:18:50.005950 6545 factory.go:1336] Added *v1.Node event handler 7\\\\nI1008 13:18:50.006003 6545 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 13:18:50.006017 6545 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006020 6545 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 13:18:50.006069 6545 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 13:18:50.006122 6545 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 13:18:50.006158 6545 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 13:18:50.006174 6545 factory.go:656] Stopping watch factory\\\\nI1008 13:18:50.006203 6545 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 13:18:50.006559 6545 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1008 13:18:50.006673 6545 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 13:18:50.006729 6545 ovnkube.go:599] Stopped ovnkube\\\\nI1008 13:18:50.006776 6545 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:18:50.006947 6545 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:09Z\\\",\\\"message\\\":\\\"rator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624054 6813 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-96g69 openshift-machine-config-operator/machine-config-daemon-f2pbj openshift-multus/network-metrics-daemon-6nwh2 openshift-multus/multus-additional-cni-plugins-8xgfx openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-fdcv2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-etcd/etcd-crc openshift-multus/multus-dkvkk]\\\\nI1008 13:19:09.624066 6813 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624080 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in 
iterateRetryResources\\\\nF1008 13:19:09.624088 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.363305 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.379523 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.388464 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.402095 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.416817 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:10Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.446625 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.446658 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.446668 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.446683 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.446694 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:10Z","lastTransitionTime":"2025-10-08T13:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.549283 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.549319 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.549328 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.549343 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.549355 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:10Z","lastTransitionTime":"2025-10-08T13:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.651752 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.651841 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.651875 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.651919 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.651943 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:10Z","lastTransitionTime":"2025-10-08T13:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.755391 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.755471 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.755487 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.755511 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.755527 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:10Z","lastTransitionTime":"2025-10-08T13:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.858403 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.858483 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.858504 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.858526 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.858543 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:10Z","lastTransitionTime":"2025-10-08T13:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.873777 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:10 crc kubenswrapper[5065]: E1008 13:19:10.873951 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.961361 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.961478 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.961508 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.961540 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:10 crc kubenswrapper[5065]: I1008 13:19:10.961561 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:10Z","lastTransitionTime":"2025-10-08T13:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.064331 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.064369 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.064381 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.064398 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.064411 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:11Z","lastTransitionTime":"2025-10-08T13:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.167722 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.167802 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.167842 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.167866 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.167883 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:11Z","lastTransitionTime":"2025-10-08T13:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.190921 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/2.log" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.195675 5065 scope.go:117] "RemoveContainer" containerID="147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f" Oct 08 13:19:11 crc kubenswrapper[5065]: E1008 13:19:11.196062 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.208708 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.221599 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.236054 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.248021 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.265551 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:09Z\\\",\\\"message\\\":\\\"rator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624054 6813 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-96g69 openshift-machine-config-operator/machine-config-daemon-f2pbj openshift-multus/network-metrics-daemon-6nwh2 openshift-multus/multus-additional-cni-plugins-8xgfx openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-fdcv2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-etcd/etcd-crc openshift-multus/multus-dkvkk]\\\\nI1008 13:19:09.624066 6813 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624080 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1008 13:19:09.624088 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.269813 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.269871 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.269885 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.269903 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.269914 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:11Z","lastTransitionTime":"2025-10-08T13:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.279827 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.294864 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.304580 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.323661 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.336721 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.349040 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.359337 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.370717 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.372034 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.372082 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.372097 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.372118 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 
13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.372133 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:11Z","lastTransitionTime":"2025-10-08T13:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.383673 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-c
ontroller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.393132 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.402123 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.410477 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:11Z is after 2025-08-24T17:21:41Z"
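Every status-patch failure in this stretch has the same root cause: the API server cannot call the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, because that endpoint's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-10-08T13:19:11Z. One way to confirm the validity window from the node is to pull the serving certificate and print its dates; the sketch below is illustrative only and assumes Python 3 plus the third-party cryptography package, neither of which appears in the log:

    import ssl
    from cryptography import x509

    # Webhook endpoint taken from the failed Post in the entries above.
    HOST, PORT = "127.0.0.1", 9743

    # Fetch the serving certificate WITHOUT verifying it -- a verifying
    # handshake would fail exactly the way the kubelet's webhook call did.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)  # expect 2025-08-24 17:21:41 UTC

(The *_utc accessors need cryptography >= 42; older releases expose naive not_valid_before/not_valid_after. The same dates can be read with: openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -dates.)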
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.478893 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.478942 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.478953 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.478975 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.478986 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:11Z","lastTransitionTime":"2025-10-08T13:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.586231 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.586271 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.586284 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.586309 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.586322 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:11Z","lastTransitionTime":"2025-10-08T13:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.689614 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.689663 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.689674 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.689692 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.689704 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:11Z","lastTransitionTime":"2025-10-08T13:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
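From here the log settles into the kubelet's NotReady loop: the container runtime keeps reporting NetworkReady=false for one concrete reason, no CNI network configuration under /etc/kubernetes/cni/net.d/, and the sandbox-creation messages that follow are downstream of that single condition. The readiness probe amounts to "is there any *.conflist/*.conf/*.json file in the conf dir"; the Python sketch below only approximates what the CNI config loader does (it is not the actual CRI-O/libcni code; the directory path is the one from the log message):

    import json
    from pathlib import Path

    # Conf dir named in the NetworkPluginNotReady message above.
    NET_D = Path("/etc/kubernetes/cni/net.d")

    # libcni loads the lexically first *.conflist/*.conf/*.json it finds;
    # an empty directory is exactly the "no CNI configuration file" state.
    confs = []
    if NET_D.is_dir():
        confs = sorted(p for p in NET_D.iterdir()
                       if p.suffix in (".conf", ".conflist", ".json"))
    if not confs:
        print(f"no CNI configuration file in {NET_D}/ -- network not ready")
    for p in confs:
        cfg = json.loads(p.read_text())
        plugins = cfg.get("plugins", [cfg])  # .conflist vs. single .conf
        print(p.name, "->", cfg.get("name"), [pl.get("type") for pl in plugins])

Once the cluster's network provider (here, the openshift-multus pods visible above plus the default plugin) writes its config file into that directory, NetworkReady flips to true and these repeated NodeNotReady entries stop.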
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.792941 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.792979 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.792990 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.793009 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.793020 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:11Z","lastTransitionTime":"2025-10-08T13:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.873371 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.873409 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.873473 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 13:19:11 crc kubenswrapper[5065]: E1008 13:19:11.873634 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 13:19:11 crc kubenswrapper[5065]: E1008 13:19:11.873827 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 13:19:11 crc kubenswrapper[5065]: E1008 13:19:11.873910 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.896273 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.896320 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.896329 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.896346 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.896356 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:11Z","lastTransitionTime":"2025-10-08T13:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.999322 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.999386 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.999403 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.999456 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:11 crc kubenswrapper[5065]: I1008 13:19:11.999474 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:11Z","lastTransitionTime":"2025-10-08T13:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.102527 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.102619 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.102644 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.102675 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.102699 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:12Z","lastTransitionTime":"2025-10-08T13:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.204864 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.204909 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.204920 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.204939 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.204951 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:12Z","lastTransitionTime":"2025-10-08T13:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.307358 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.307488 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.307564 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.307597 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.307636 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:12Z","lastTransitionTime":"2025-10-08T13:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.409893 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.409945 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.409959 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.409978 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.409988 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:12Z","lastTransitionTime":"2025-10-08T13:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.513111 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.513187 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.513210 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.513241 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.513264 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:12Z","lastTransitionTime":"2025-10-08T13:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.616472 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.616512 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.616528 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.616547 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.616565 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:12Z","lastTransitionTime":"2025-10-08T13:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.718878 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.718924 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.718935 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.718952 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.718961 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:12Z","lastTransitionTime":"2025-10-08T13:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.820871 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.820900 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.820909 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.820923 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.820934 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:12Z","lastTransitionTime":"2025-10-08T13:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.873168 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:12 crc kubenswrapper[5065]: E1008 13:19:12.873304 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.923302 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.923395 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.923513 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.923541 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:12 crc kubenswrapper[5065]: I1008 13:19:12.923563 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:12Z","lastTransitionTime":"2025-10-08T13:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.025445 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.025482 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.025492 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.025517 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.025528 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:13Z","lastTransitionTime":"2025-10-08T13:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.128322 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.128372 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.128383 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.128401 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.128433 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:13Z","lastTransitionTime":"2025-10-08T13:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.230974 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.231063 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.231074 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.231094 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.231109 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:13Z","lastTransitionTime":"2025-10-08T13:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.334072 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.334160 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.334189 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.334219 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.334240 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:13Z","lastTransitionTime":"2025-10-08T13:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.436994 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.437040 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.437051 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.437069 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.437082 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:13Z","lastTransitionTime":"2025-10-08T13:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.538636 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.538679 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.538691 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.538708 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.538721 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:13Z","lastTransitionTime":"2025-10-08T13:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.640874 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.640930 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.640945 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.640966 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.640980 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:13Z","lastTransitionTime":"2025-10-08T13:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.743290 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.743342 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.743355 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.743374 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.743384 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:13Z","lastTransitionTime":"2025-10-08T13:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.845473 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.845521 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.845532 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.845551 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.845560 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:13Z","lastTransitionTime":"2025-10-08T13:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.872819 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.872908 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:13 crc kubenswrapper[5065]: E1008 13:19:13.872979 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.873058 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:13 crc kubenswrapper[5065]: E1008 13:19:13.873246 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:13 crc kubenswrapper[5065]: E1008 13:19:13.873341 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.950767 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.951146 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.951189 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.951302 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:13 crc kubenswrapper[5065]: I1008 13:19:13.951339 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:13Z","lastTransitionTime":"2025-10-08T13:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.055102 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.055231 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.055257 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.055292 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.055317 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.158567 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.158674 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.158694 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.158730 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.158749 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.262118 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.262163 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.262171 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.262187 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.262196 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.365391 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.365567 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.365598 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.365680 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.365710 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.468749 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.468821 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.468845 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.468873 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.468892 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.570314 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.570375 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.570396 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.570464 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.570491 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: E1008 13:19:14.590532 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:14Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.595889 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.595967 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.595989 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.596023 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.596043 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: E1008 13:19:14.617374 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:14Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.623769 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.623850 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.623885 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.623915 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.623937 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: E1008 13:19:14.641733 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:14Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.646165 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.646209 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.646256 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.646278 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.646295 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: E1008 13:19:14.662432 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:14Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.666141 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.666197 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.666209 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.666226 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.666237 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: E1008 13:19:14.677141 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:14Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:14 crc kubenswrapper[5065]: E1008 13:19:14.677265 5065 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.679093 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.679145 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.679206 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.679227 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.679243 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.782041 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.782094 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.782137 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.782156 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.782168 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.873500 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:14 crc kubenswrapper[5065]: E1008 13:19:14.873702 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.884011 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.884047 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.884059 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.884077 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.884088 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.987002 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.987067 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.987088 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.987118 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:14 crc kubenswrapper[5065]: I1008 13:19:14.987144 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:14Z","lastTransitionTime":"2025-10-08T13:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.090386 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.090447 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.090460 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.090478 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.090489 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:15Z","lastTransitionTime":"2025-10-08T13:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.193873 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.193931 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.193951 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.193975 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.193992 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:15Z","lastTransitionTime":"2025-10-08T13:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.296759 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.296825 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.296847 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.296877 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.296900 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:15Z","lastTransitionTime":"2025-10-08T13:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.398960 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.399024 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.399041 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.399067 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.399084 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:15Z","lastTransitionTime":"2025-10-08T13:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.459057 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.471780 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.476645 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.497381 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.504625 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.504678 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.504691 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.504708 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.504719 5065 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:15Z","lastTransitionTime":"2025-10-08T13:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.511739 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.533871 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.550842 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/op
enshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.568449 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.579704 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.590467 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.599656 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.607676 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.607716 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.607726 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.607741 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.607750 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:15Z","lastTransitionTime":"2025-10-08T13:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.621592 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266
838359f6a0d016b07d2ba08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:09Z\\\",\\\"message\\\":\\\"rator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624054 6813 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-96g69 openshift-machine-config-operator/machine-config-daemon-f2pbj openshift-multus/network-metrics-daemon-6nwh2 openshift-multus/multus-additional-cni-plugins-8xgfx openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-fdcv2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-etcd/etcd-crc openshift-multus/multus-dkvkk]\\\\nI1008 13:19:09.624066 6813 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624080 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1008 13:19:09.624088 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.637541 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.663702 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.681393 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 
13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.696374 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.706241 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.709436 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.709476 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.709486 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.709503 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.709513 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:15Z","lastTransitionTime":"2025-10-08T13:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.716519 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.726273 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:15Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.811976 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.812013 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.812025 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.812047 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.812062 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:15Z","lastTransitionTime":"2025-10-08T13:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.873459 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.873527 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:15 crc kubenswrapper[5065]: E1008 13:19:15.873560 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.873650 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:15 crc kubenswrapper[5065]: E1008 13:19:15.873679 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:15 crc kubenswrapper[5065]: E1008 13:19:15.873886 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.915741 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.915796 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.915805 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.915822 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:15 crc kubenswrapper[5065]: I1008 13:19:15.915833 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:15Z","lastTransitionTime":"2025-10-08T13:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.018693 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.018736 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.018747 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.018762 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.018771 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:16Z","lastTransitionTime":"2025-10-08T13:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.121371 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.121452 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.121464 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.121480 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.121491 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:16Z","lastTransitionTime":"2025-10-08T13:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.223908 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.223945 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.223977 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.223996 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.224008 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:16Z","lastTransitionTime":"2025-10-08T13:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.326702 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.326787 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.326813 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.326847 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.326872 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:16Z","lastTransitionTime":"2025-10-08T13:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.429654 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.429712 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.429727 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.429749 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.429763 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:16Z","lastTransitionTime":"2025-10-08T13:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.532862 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.532905 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.532914 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.532934 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.532944 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:16Z","lastTransitionTime":"2025-10-08T13:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.634896 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.634925 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.634933 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.634946 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.634955 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:16Z","lastTransitionTime":"2025-10-08T13:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.738001 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.738052 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.738069 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.738095 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.738113 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:16Z","lastTransitionTime":"2025-10-08T13:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.840676 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.840722 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.840736 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.840757 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.840770 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:16Z","lastTransitionTime":"2025-10-08T13:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.873336 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:16 crc kubenswrapper[5065]: E1008 13:19:16.873535 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.943768 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.943834 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.943852 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.943880 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:16 crc kubenswrapper[5065]: I1008 13:19:16.943903 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:16Z","lastTransitionTime":"2025-10-08T13:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.046167 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.046240 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.046256 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.046275 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.046288 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:17Z","lastTransitionTime":"2025-10-08T13:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.148161 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.148203 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.148213 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.148229 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.148239 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:17Z","lastTransitionTime":"2025-10-08T13:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.251721 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.251765 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.251775 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.251791 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.251801 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:17Z","lastTransitionTime":"2025-10-08T13:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.354056 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.354110 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.354165 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.354190 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.354206 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:17Z","lastTransitionTime":"2025-10-08T13:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.457308 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.457351 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.457361 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.457378 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.457390 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:17Z","lastTransitionTime":"2025-10-08T13:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.560854 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.560936 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.560976 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.561009 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.561029 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:17Z","lastTransitionTime":"2025-10-08T13:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.663531 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.663607 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.663633 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.663664 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.663687 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:17Z","lastTransitionTime":"2025-10-08T13:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.767142 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.767173 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.767183 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.767196 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.767206 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:17Z","lastTransitionTime":"2025-10-08T13:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.869481 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.869521 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.869531 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.869545 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.869554 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:17Z","lastTransitionTime":"2025-10-08T13:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.872798 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:17 crc kubenswrapper[5065]: E1008 13:19:17.872912 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.872814 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:17 crc kubenswrapper[5065]: E1008 13:19:17.872983 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.872811 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:17 crc kubenswrapper[5065]: E1008 13:19:17.873038 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.971784 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.971835 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.971854 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.971876 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:17 crc kubenswrapper[5065]: I1008 13:19:17.971891 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:17Z","lastTransitionTime":"2025-10-08T13:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.074682 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.074758 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.074784 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.074810 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.074828 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:18Z","lastTransitionTime":"2025-10-08T13:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.177468 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.177515 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.177525 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.177541 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.177552 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:18Z","lastTransitionTime":"2025-10-08T13:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.280149 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.280257 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.280278 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.280300 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.280317 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:18Z","lastTransitionTime":"2025-10-08T13:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.383246 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.383333 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.383358 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.383390 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.383466 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:18Z","lastTransitionTime":"2025-10-08T13:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.485772 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.485876 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.485900 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.485931 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.485953 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:18Z","lastTransitionTime":"2025-10-08T13:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.588793 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.588826 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.588834 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.588848 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.588856 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:18Z","lastTransitionTime":"2025-10-08T13:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.691456 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.691483 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.691491 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.691505 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.691513 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:18Z","lastTransitionTime":"2025-10-08T13:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.794002 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.794046 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.794055 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.794074 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.794085 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:18Z","lastTransitionTime":"2025-10-08T13:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.873081 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:18 crc kubenswrapper[5065]: E1008 13:19:18.873462 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.895959 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.896008 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.896019 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.896036 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.896049 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:18Z","lastTransitionTime":"2025-10-08T13:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.899594 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:18Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.911185 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63c8511-ce18-4344-b40d-a2868aafd953\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f437667914b286a4a5be10b7d8e0ff79549b694e7a427b67e403abd0cf67496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5b09ea08287ed83d2bac95c8b6780b91269b8507b63b1324242eb2f2a7fe840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dac57ae099af0a2f05f17da9ddc0853b5513bc747fd5f0aa959d7f3baca74b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:18Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.923900 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.937780 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:18Z is after 2025-08-24T17:21:41Z"
Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.950360 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:18Z is after 2025-08-24T17:21:41Z"
Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.967839 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:18Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.979825 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:18Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.991340 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:18Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.998534 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.998573 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.998582 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.998596 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:18 crc kubenswrapper[5065]: I1008 13:19:18.998605 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:18Z","lastTransitionTime":"2025-10-08T13:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.002406 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:19Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.012775 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:19Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.025807 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:19Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.036025 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:19Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.046606 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:19Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.058228 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:19Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.076185 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:09Z\\\",\\\"message\\\":\\\"rator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624054 6813 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-96g69 openshift-machine-config-operator/machine-config-daemon-f2pbj openshift-multus/network-metrics-daemon-6nwh2 openshift-multus/multus-additional-cni-plugins-8xgfx openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-fdcv2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-etcd/etcd-crc openshift-multus/multus-dkvkk]\\\\nI1008 13:19:09.624066 6813 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624080 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1008 13:19:09.624088 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:19Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.088488 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:19Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.100956 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.100997 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.101007 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.101024 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.101036 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:19Z","lastTransitionTime":"2025-10-08T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.102430 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:
18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:19Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.113392 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:19Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.204158 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.204190 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.204198 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.204214 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.204222 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:19Z","lastTransitionTime":"2025-10-08T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.307235 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.307632 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.307720 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.307806 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.307890 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:19Z","lastTransitionTime":"2025-10-08T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.410751 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.411018 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.411113 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.411207 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.411290 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:19Z","lastTransitionTime":"2025-10-08T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.513523 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.513582 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.513595 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.513611 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.513623 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:19Z","lastTransitionTime":"2025-10-08T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.615812 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.616084 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.616156 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.616228 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.616293 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:19Z","lastTransitionTime":"2025-10-08T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.719771 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.720046 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.720144 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.720250 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.720352 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:19Z","lastTransitionTime":"2025-10-08T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.822969 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.823639 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.823680 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.823708 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.823721 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:19Z","lastTransitionTime":"2025-10-08T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.872892 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.872911 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.873007 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:19 crc kubenswrapper[5065]: E1008 13:19:19.873485 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:19 crc kubenswrapper[5065]: E1008 13:19:19.873249 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:19 crc kubenswrapper[5065]: E1008 13:19:19.873596 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.926095 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.926135 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.926143 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.926156 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:19 crc kubenswrapper[5065]: I1008 13:19:19.926165 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:19Z","lastTransitionTime":"2025-10-08T13:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.029359 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.029447 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.029464 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.029483 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.029495 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:20Z","lastTransitionTime":"2025-10-08T13:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.131886 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.132170 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.132257 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.132346 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.132450 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:20Z","lastTransitionTime":"2025-10-08T13:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.236611 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.236647 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.236657 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.236670 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.236679 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:20Z","lastTransitionTime":"2025-10-08T13:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.338932 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.338979 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.338992 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.339015 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.339026 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:20Z","lastTransitionTime":"2025-10-08T13:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.440937 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.440974 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.440985 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.441002 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.441013 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:20Z","lastTransitionTime":"2025-10-08T13:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.543332 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.543668 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.543779 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.543879 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.543971 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:20Z","lastTransitionTime":"2025-10-08T13:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.647638 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.647734 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.647750 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.647775 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.647791 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:20Z","lastTransitionTime":"2025-10-08T13:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.750912 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.750982 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.751003 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.751033 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.751053 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:20Z","lastTransitionTime":"2025-10-08T13:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.854281 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.854348 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.854366 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.854392 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.854439 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:20Z","lastTransitionTime":"2025-10-08T13:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.873787 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:20 crc kubenswrapper[5065]: E1008 13:19:20.874005 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.957830 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.957890 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.957899 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.957917 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:20 crc kubenswrapper[5065]: I1008 13:19:20.957927 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:20Z","lastTransitionTime":"2025-10-08T13:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.060292 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.060361 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.060377 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.060399 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.060438 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:21Z","lastTransitionTime":"2025-10-08T13:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.162903 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.163201 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.163209 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.163224 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.163234 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:21Z","lastTransitionTime":"2025-10-08T13:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.265719 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.265786 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.265803 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.265821 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.265832 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:21Z","lastTransitionTime":"2025-10-08T13:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.368296 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.368329 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.368337 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.368352 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.368360 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:21Z","lastTransitionTime":"2025-10-08T13:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.471504 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.471538 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.471546 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.471564 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.471596 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:21Z","lastTransitionTime":"2025-10-08T13:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.574316 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.574364 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.574374 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.574391 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.574400 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:21Z","lastTransitionTime":"2025-10-08T13:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.677611 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.677689 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.677709 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.677737 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.677957 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:21Z","lastTransitionTime":"2025-10-08T13:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.781534 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.781598 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.781609 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.781633 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.781647 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:21Z","lastTransitionTime":"2025-10-08T13:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.873482 5065 scope.go:117] "RemoveContainer" containerID="147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f" Oct 08 13:19:21 crc kubenswrapper[5065]: E1008 13:19:21.873633 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.873713 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.873742 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.873808 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:21 crc kubenswrapper[5065]: E1008 13:19:21.873898 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:21 crc kubenswrapper[5065]: E1008 13:19:21.874007 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:21 crc kubenswrapper[5065]: E1008 13:19:21.874089 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.885056 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.885098 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.885112 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.885130 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.885144 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:21Z","lastTransitionTime":"2025-10-08T13:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.988338 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.988465 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.988505 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.988536 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:21 crc kubenswrapper[5065]: I1008 13:19:21.988558 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:21Z","lastTransitionTime":"2025-10-08T13:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.091865 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.091909 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.091919 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.091935 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.091944 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:22Z","lastTransitionTime":"2025-10-08T13:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.194271 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.194314 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.194327 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.194343 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.194354 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:22Z","lastTransitionTime":"2025-10-08T13:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.297434 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.297469 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.297477 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.297492 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.297500 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:22Z","lastTransitionTime":"2025-10-08T13:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.400536 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.400576 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.400587 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.400604 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.400615 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:22Z","lastTransitionTime":"2025-10-08T13:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.503587 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.503654 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.503674 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.503693 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.503705 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:22Z","lastTransitionTime":"2025-10-08T13:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.607343 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.607458 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.607486 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.607517 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.607538 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:22Z","lastTransitionTime":"2025-10-08T13:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.710398 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.710449 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.710458 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.710474 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.710484 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:22Z","lastTransitionTime":"2025-10-08T13:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.813234 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.813273 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.813287 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.813307 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.813318 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:22Z","lastTransitionTime":"2025-10-08T13:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.873621 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:22 crc kubenswrapper[5065]: E1008 13:19:22.873881 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.915451 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.915501 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.915514 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.915531 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:22 crc kubenswrapper[5065]: I1008 13:19:22.915543 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:22Z","lastTransitionTime":"2025-10-08T13:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.018663 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.018717 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.018730 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.018748 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.018760 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:23Z","lastTransitionTime":"2025-10-08T13:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.121656 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.121689 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.121700 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.121716 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.121727 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:23Z","lastTransitionTime":"2025-10-08T13:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.225248 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.225286 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.225297 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.225312 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.225323 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:23Z","lastTransitionTime":"2025-10-08T13:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.328088 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.328138 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.328155 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.328198 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.328217 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:23Z","lastTransitionTime":"2025-10-08T13:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.430868 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.430909 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.430917 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.430934 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.430947 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:23Z","lastTransitionTime":"2025-10-08T13:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.442463 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:23 crc kubenswrapper[5065]: E1008 13:19:23.442653 5065 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:19:23 crc kubenswrapper[5065]: E1008 13:19:23.442724 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs podName:c8a38e7c-bbc4-4255-ab4e-a056eb0655be nodeName:}" failed. No retries permitted until 2025-10-08 13:19:55.442702043 +0000 UTC m=+97.220083800 (durationBeforeRetry 32s). 
Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.533480 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.533532 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.533549 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.533572 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.533588 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:23Z","lastTransitionTime":"2025-10-08T13:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.635169 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.635200 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.635209 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.635222 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.635232 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:23Z","lastTransitionTime":"2025-10-08T13:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
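Annotation: every Ready=False heartbeat in this stretch carries the same root message: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI network configuration. A minimal Go sketch of that discovery step (a libcni-style file scan, shown for illustration rather than the actual CRI-O source):

```go
// Sketch of CNI config discovery: list the conf dir and treat an empty
// result as NetworkReady=false, mirroring the error text in the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func cniConfFiles(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var confs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json": // extensions libcni-style scanners accept
			confs = append(confs, filepath.Join(dir, e.Name()))
		}
	}
	return confs, nil
}

func main() {
	confs, err := cniConfFiles("/etc/kubernetes/cni/net.d")
	if err != nil || len(confs) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	fmt.Println("CNI configs:", confs)
}
```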
Has your network provider started?"} Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.738326 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.738368 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.738382 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.738398 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.738427 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:23Z","lastTransitionTime":"2025-10-08T13:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.840153 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.840184 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.840191 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.840205 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.840217 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:23Z","lastTransitionTime":"2025-10-08T13:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.873398 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.873398 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:23 crc kubenswrapper[5065]: E1008 13:19:23.873631 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:23 crc kubenswrapper[5065]: E1008 13:19:23.873545 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.873433 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:23 crc kubenswrapper[5065]: E1008 13:19:23.873716 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.942187 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.942229 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.942238 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.942254 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:23 crc kubenswrapper[5065]: I1008 13:19:23.942264 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:23Z","lastTransitionTime":"2025-10-08T13:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.045257 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.045304 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.045314 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.045329 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.045340 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:24Z","lastTransitionTime":"2025-10-08T13:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.147934 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.148213 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.148349 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.148472 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.148573 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:24Z","lastTransitionTime":"2025-10-08T13:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.251043 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.251273 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.251338 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.251406 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.251493 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:24Z","lastTransitionTime":"2025-10-08T13:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.354368 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.354679 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.354759 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.354842 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.354915 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:24Z","lastTransitionTime":"2025-10-08T13:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.457786 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.457844 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.457856 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.457876 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.457887 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:24Z","lastTransitionTime":"2025-10-08T13:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.560030 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.560079 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.560091 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.560108 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.560120 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:24Z","lastTransitionTime":"2025-10-08T13:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.662738 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.662785 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.662796 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.662812 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.662823 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:24Z","lastTransitionTime":"2025-10-08T13:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.765265 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.765305 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.765316 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.765334 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.765357 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:24Z","lastTransitionTime":"2025-10-08T13:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.867076 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.867108 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.867115 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.867130 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.867138 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:24Z","lastTransitionTime":"2025-10-08T13:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.878784 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:24 crc kubenswrapper[5065]: E1008 13:19:24.878991 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.969984 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.970025 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.970036 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.970051 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.970062 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:24Z","lastTransitionTime":"2025-10-08T13:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.984821 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.984867 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.984877 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.984894 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:24 crc kubenswrapper[5065]: I1008 13:19:24.984907 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:24Z","lastTransitionTime":"2025-10-08T13:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:24 crc kubenswrapper[5065]: E1008 13:19:24.997774 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:24Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.001882 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.001925 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.001937 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.001955 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.001967 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:25Z","lastTransitionTime":"2025-10-08T13:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:25 crc kubenswrapper[5065]: E1008 13:19:25.013760 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:25Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.017387 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.017442 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.017453 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.017471 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.017482 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:25Z","lastTransitionTime":"2025-10-08T13:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:25 crc kubenswrapper[5065]: E1008 13:19:25.029800 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:25Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.033761 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.033807 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.033817 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.033834 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.033849 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:25Z","lastTransitionTime":"2025-10-08T13:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:25 crc kubenswrapper[5065]: E1008 13:19:25.045020 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:25Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.048105 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.048134 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
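The retried "Error updating node status" entries above all fail for the same root cause stated at the end of each payload: the serving certificate behind https://127.0.0.1:9743 (the node.network-node-identity.openshift.io webhook) expired on 2025-08-24T17:21:41Z, long before the log's current time of 2025-10-08. A minimal Go sketch for confirming that validity window from the node itself is below. It is our illustration, not OpenShift code; the only value taken from the log is the loopback address, and InsecureSkipVerify is used deliberately so the certificate can still be inspected after verification fails.

// certcheck.go - minimal sketch (assumption: run on the node, since the
// webhook listens on loopback). Prints each presented certificate's
// validity window so an expiry like the one logged above is obvious.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Address quoted in the failing webhook call in the log.
	const addr = "127.0.0.1:9743"

	// Skip chain verification on purpose: this tool only inspects the
	// certificate, it makes no trust decision.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()

	now := time.Now()
	for i, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("cert[%d] subject=%s\n", i, cert.Subject)
		fmt.Printf("  NotBefore=%s NotAfter=%s expired=%t\n",
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}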
event="NodeHasNoDiskPressure" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.048144 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.048158 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.048168 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:25Z","lastTransitionTime":"2025-10-08T13:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:25 crc kubenswrapper[5065]: E1008 13:19:25.059902 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:25Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:25 crc kubenswrapper[5065]: E1008 13:19:25.060079 5065 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.072385 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
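The entry above is the kubelet giving up after its fixed per-sync retry budget. Each attempt was a strategic merge patch against the Node's status subresource, which is exactly the request the failing webhook intercepts. The sketch below, ours rather than kubelet code, shows the equivalent client-go call in miniature; the kubeconfig path and the condition values are placeholders, and server-side merging by condition type is what lets the payload carry only the changed conditions.

// patchnode.go - hedged sketch of the call shape behind the logged
// "failed to patch status" errors (assumptions: kubeconfig path, values).
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Strategic merge: conditions are merged by their "type" key, so only
	// the changed condition needs to appear, as in the logged payload.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)

	// The trailing "status" argument targets the status subresource, the
	// same request path the node webhook in this log intercepts.
	_, err = cs.CoreV1().Nodes().Patch(context.TODO(), "crc",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		log.Fatalf("patch node status: %v", err)
	}
}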
event="NodeHasSufficientMemory" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.072437 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.072448 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.072462 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.072471 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:25Z","lastTransitionTime":"2025-10-08T13:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.174965 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.175015 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.175027 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.175046 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.175058 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:25Z","lastTransitionTime":"2025-10-08T13:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.281473 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.281520 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.281567 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.283291 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.283367 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:25Z","lastTransitionTime":"2025-10-08T13:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.385918 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.385955 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.385964 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.385977 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.385988 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:25Z","lastTransitionTime":"2025-10-08T13:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.487988 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.488028 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.488040 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.488056 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.488068 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:25Z","lastTransitionTime":"2025-10-08T13:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.590240 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.590270 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.590279 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.590292 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.590300 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:25Z","lastTransitionTime":"2025-10-08T13:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.693330 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.693373 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.693383 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.693400 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.693428 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:25Z","lastTransitionTime":"2025-10-08T13:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.796040 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.796086 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.796100 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.796119 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.796132 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:25Z","lastTransitionTime":"2025-10-08T13:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.873051 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.873137 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.873079 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:25 crc kubenswrapper[5065]: E1008 13:19:25.873253 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:25 crc kubenswrapper[5065]: E1008 13:19:25.873396 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:25 crc kubenswrapper[5065]: E1008 13:19:25.873528 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.899079 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.899127 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.899137 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.899154 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:25 crc kubenswrapper[5065]: I1008 13:19:25.899163 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:25Z","lastTransitionTime":"2025-10-08T13:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.001907 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.001968 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.001987 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.002013 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.002031 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:26Z","lastTransitionTime":"2025-10-08T13:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.104472 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.104556 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.104576 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.104593 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.104604 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:26Z","lastTransitionTime":"2025-10-08T13:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.207337 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.207382 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.207393 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.207428 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.207440 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:26Z","lastTransitionTime":"2025-10-08T13:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.240877 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkvkk_ddc2ce1c-bf76-4663-a2d6-e518ff7a4678/kube-multus/0.log" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.240958 5065 generic.go:334] "Generic (PLEG): container finished" podID="ddc2ce1c-bf76-4663-a2d6-e518ff7a4678" containerID="72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c" exitCode=1 Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.240999 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkvkk" event={"ID":"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678","Type":"ContainerDied","Data":"72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c"} Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.241471 5065 scope.go:117] "RemoveContainer" containerID="72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.255740 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.269304 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.285692 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.298794 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 
13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.309169 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.309191 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.309201 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.309215 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.309223 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:26Z","lastTransitionTime":"2025-10-08T13:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.312258 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 
13:19:26.323249 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.335896 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.347087 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.363889 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc
/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:09Z\\\",\\\"message\\\":\\\"rator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624054 6813 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-96g69 openshift-machine-config-operator/machine-config-daemon-f2pbj openshift-multus/network-metrics-daemon-6nwh2 openshift-multus/multus-additional-cni-plugins-8xgfx openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-fdcv2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-etcd/etcd-crc openshift-multus/multus-dkvkk]\\\\nI1008 13:19:09.624066 6813 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624080 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1008 13:19:09.624088 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.378316 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"2025-10-08T13:18:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f\\\\n2025-10-08T13:18:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f to /host/opt/cni/bin/\\\\n2025-10-08T13:18:40Z [verbose] multus-daemon started\\\\n2025-10-08T13:18:40Z [verbose] Readiness Indicator file check\\\\n2025-10-08T13:19:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.398061 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.407489 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63c8511-ce18-4344-b40d-a2868aafd953\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f437667914b286a4a5be10b7d8e0ff79549b694e7a427b67e403abd0cf67496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5b09ea08287ed83d2bac95c8b6780b91269b8507b63b1324242eb2f2a7fe840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dac57ae099af0a2f05f17da9ddc0853b5513bc747fd5f0aa959d7f3baca74b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.411286 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.411315 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.411323 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.411339 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.411348 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:26Z","lastTransitionTime":"2025-10-08T13:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.417545 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.427680 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.440582 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.454313 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.465599 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.475154 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:26Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.513950 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.513979 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.513999 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.514012 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.514023 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:26Z","lastTransitionTime":"2025-10-08T13:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.615811 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.615845 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.615852 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.615865 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.616392 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:26Z","lastTransitionTime":"2025-10-08T13:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.727854 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.728141 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.728449 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.728529 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.728590 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:26Z","lastTransitionTime":"2025-10-08T13:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.831427 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.831459 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.831470 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.831485 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.831493 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:26Z","lastTransitionTime":"2025-10-08T13:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.873506 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:26 crc kubenswrapper[5065]: E1008 13:19:26.873643 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.934256 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.934515 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.934616 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.934707 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:26 crc kubenswrapper[5065]: I1008 13:19:26.934786 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:26Z","lastTransitionTime":"2025-10-08T13:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.038090 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.038773 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.038880 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.038986 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.039054 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:27Z","lastTransitionTime":"2025-10-08T13:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.141510 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.141799 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.141901 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.141989 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.142095 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:27Z","lastTransitionTime":"2025-10-08T13:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.243975 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.244008 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.244016 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.244031 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.244039 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:27Z","lastTransitionTime":"2025-10-08T13:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.245712 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkvkk_ddc2ce1c-bf76-4663-a2d6-e518ff7a4678/kube-multus/0.log" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.245765 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkvkk" event={"ID":"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678","Type":"ContainerStarted","Data":"bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b"} Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.256559 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.265359 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.275290 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 
13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.287996 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.296089 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.306132 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.314252 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.331842 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:09Z\\\",\\\"message\\\":\\\"rator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624054 6813 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-96g69 openshift-machine-config-operator/machine-config-daemon-f2pbj openshift-multus/network-metrics-daemon-6nwh2 openshift-multus/multus-additional-cni-plugins-8xgfx openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-fdcv2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-etcd/etcd-crc openshift-multus/multus-dkvkk]\\\\nI1008 13:19:09.624066 6813 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624080 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1008 13:19:09.624088 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.343659 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"2025-10-08T13:18:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f\\\\n2025-10-08T13:18:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f to /host/opt/cni/bin/\\\\n2025-10-08T13:18:40Z [verbose] multus-daemon started\\\\n2025-10-08T13:18:40Z [verbose] Readiness Indicator file check\\\\n2025-10-08T13:19:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.346573 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.346596 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.346603 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.346616 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.346624 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:27Z","lastTransitionTime":"2025-10-08T13:19:27Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.357641 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.377201 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.386532 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63c8511-ce18-4344-b40d-a2868aafd953\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f437667914b286a4a5be10b7d8e0ff79549b694e7a427b67e403abd0cf67496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5b09ea08287ed83d2bac95c8b6780b91269b8507b63b1324242eb2f2a7fe840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dac57ae099af0a2f05f17da9ddc0853b5513bc747fd5f0aa959d7f3baca74b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.396956 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.407997 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.418381 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.428719 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.439287 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.448319 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:27Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.449291 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.449324 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.449333 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.449346 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.449355 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:27Z","lastTransitionTime":"2025-10-08T13:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.552093 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.552121 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.552129 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.552143 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.552153 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:27Z","lastTransitionTime":"2025-10-08T13:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.654192 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.654430 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.654547 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.654653 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.654740 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:27Z","lastTransitionTime":"2025-10-08T13:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.756790 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.756826 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.756835 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.756850 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.756860 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:27Z","lastTransitionTime":"2025-10-08T13:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.859695 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.859745 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.859756 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.859778 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.859790 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:27Z","lastTransitionTime":"2025-10-08T13:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.873180 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.873272 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:27 crc kubenswrapper[5065]: E1008 13:19:27.873303 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.873471 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:27 crc kubenswrapper[5065]: E1008 13:19:27.873534 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:27 crc kubenswrapper[5065]: E1008 13:19:27.873687 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.963063 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.963110 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.963127 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.963145 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:27 crc kubenswrapper[5065]: I1008 13:19:27.963158 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:27Z","lastTransitionTime":"2025-10-08T13:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.071046 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.071087 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.071095 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.071111 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.071119 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:28Z","lastTransitionTime":"2025-10-08T13:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.173540 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.173639 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.173652 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.173675 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.173689 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:28Z","lastTransitionTime":"2025-10-08T13:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.275805 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.275856 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.275874 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.275891 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.275902 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:28Z","lastTransitionTime":"2025-10-08T13:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.378433 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.378473 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.378480 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.378493 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.378501 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:28Z","lastTransitionTime":"2025-10-08T13:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.480188 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.480243 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.480260 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.480284 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.480300 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:28Z","lastTransitionTime":"2025-10-08T13:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.583041 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.583105 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.583114 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.583129 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.583139 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:28Z","lastTransitionTime":"2025-10-08T13:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.685485 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.685536 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.685547 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.685565 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.685580 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:28Z","lastTransitionTime":"2025-10-08T13:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.787992 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.788028 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.788039 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.788055 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.788065 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:28Z","lastTransitionTime":"2025-10-08T13:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.872905 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:28 crc kubenswrapper[5065]: E1008 13:19:28.873062 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.882211 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:28Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.890872 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.890911 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.890920 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.890936 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.890946 5065 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:28Z","lastTransitionTime":"2025-10-08T13:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.892105 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:28Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.901469 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:28Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.916867 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:09Z\\\",\\\"message\\\":\\\"rator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624054 6813 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-96g69 openshift-machine-config-operator/machine-config-daemon-f2pbj openshift-multus/network-metrics-daemon-6nwh2 openshift-multus/multus-additional-cni-plugins-8xgfx openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-fdcv2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-etcd/etcd-crc openshift-multus/multus-dkvkk]\\\\nI1008 13:19:09.624066 6813 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624080 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1008 13:19:09.624088 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:28Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.930531 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"2025-10-08T13:18:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f\\\\n2025-10-08T13:18:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f to 
/host/opt/cni/bin/\\\\n2025-10-08T13:18:40Z [verbose] multus-daemon started\\\\n2025-10-08T13:18:40Z [verbose] Readiness Indicator file check\\\\n2025-10-08T13:19:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:28Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.944821 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:28Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.957511 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:28Z is after 2025-08-24T17:21:41Z" Oct 08 
13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.975200 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:28Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.987373 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63c8511-ce18-4344-b40d-a2868aafd953\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f437667914b286a4a5be10b7d8e0ff79549b694e7a427b67e403abd0cf67496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5b09ea08287ed83d2bac95c8b6780b91269b8507b63b1324242eb2f2a7fe840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dac57ae099af0a2f05f17da9ddc0853b5513bc747fd5f0aa959d7f3baca74b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:28Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.993503 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.993549 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.993563 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.993584 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:28 crc kubenswrapper[5065]: I1008 13:19:28.993597 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:28Z","lastTransitionTime":"2025-10-08T13:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.000564 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:28Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.015653 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:29Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.027570 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:29Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.054842 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09
a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:29Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.068344 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d
36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:29Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.080123 5065 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a
57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:29Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.092510 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:29Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.096288 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.096343 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.096358 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.096376 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.096387 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:29Z","lastTransitionTime":"2025-10-08T13:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.105057 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:29Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.115968 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:29Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.198354 5065 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.198500 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.198513 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.198530 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.198541 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:29Z","lastTransitionTime":"2025-10-08T13:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.301020 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.301061 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.301072 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.301088 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.301098 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:29Z","lastTransitionTime":"2025-10-08T13:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.404061 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.404094 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.404102 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.404115 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.404124 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:29Z","lastTransitionTime":"2025-10-08T13:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.506395 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.506464 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.506473 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.506487 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.506520 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:29Z","lastTransitionTime":"2025-10-08T13:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.608258 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.608291 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.608301 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.608316 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.608327 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:29Z","lastTransitionTime":"2025-10-08T13:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.710914 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.710972 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.710981 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.710998 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.711009 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:29Z","lastTransitionTime":"2025-10-08T13:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.813358 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.813457 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.813476 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.813505 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.813526 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:29Z","lastTransitionTime":"2025-10-08T13:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.872882 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.872976 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.872903 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:29 crc kubenswrapper[5065]: E1008 13:19:29.873018 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:29 crc kubenswrapper[5065]: E1008 13:19:29.873173 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:29 crc kubenswrapper[5065]: E1008 13:19:29.873279 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.916727 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.916770 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.916782 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.916800 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:29 crc kubenswrapper[5065]: I1008 13:19:29.916810 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:29Z","lastTransitionTime":"2025-10-08T13:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.020061 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.020120 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.020137 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.020160 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.020177 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:30Z","lastTransitionTime":"2025-10-08T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.123000 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.123048 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.123060 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.123081 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.123094 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:30Z","lastTransitionTime":"2025-10-08T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.225338 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.225397 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.225405 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.225431 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.225441 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:30Z","lastTransitionTime":"2025-10-08T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.327900 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.327937 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.327947 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.327963 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.327972 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:30Z","lastTransitionTime":"2025-10-08T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.430692 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.430760 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.430777 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.430806 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.430825 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:30Z","lastTransitionTime":"2025-10-08T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.532830 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.532898 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.532910 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.532931 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.532944 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:30Z","lastTransitionTime":"2025-10-08T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.634972 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.635013 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.635022 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.635035 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.635043 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:30Z","lastTransitionTime":"2025-10-08T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.737988 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.738045 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.738066 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.738095 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.738117 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:30Z","lastTransitionTime":"2025-10-08T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.840818 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.840863 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.840872 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.840886 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.840895 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:30Z","lastTransitionTime":"2025-10-08T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.873527 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:30 crc kubenswrapper[5065]: E1008 13:19:30.873704 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.943617 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.943680 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.943697 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.943742 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:30 crc kubenswrapper[5065]: I1008 13:19:30.943778 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:30Z","lastTransitionTime":"2025-10-08T13:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.046492 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.046584 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.046596 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.046628 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.046636 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:31Z","lastTransitionTime":"2025-10-08T13:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.148685 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.148724 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.148737 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.148753 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.148764 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:31Z","lastTransitionTime":"2025-10-08T13:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.250864 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.250916 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.250926 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.250941 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.250953 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:31Z","lastTransitionTime":"2025-10-08T13:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.353127 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.353165 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.353175 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.353189 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.353199 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:31Z","lastTransitionTime":"2025-10-08T13:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.455797 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.455838 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.455849 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.455866 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.455881 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:31Z","lastTransitionTime":"2025-10-08T13:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.557775 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.557825 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.557842 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.557865 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.557882 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:31Z","lastTransitionTime":"2025-10-08T13:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.661920 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.661965 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.661979 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.662000 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.662016 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:31Z","lastTransitionTime":"2025-10-08T13:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.764546 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.764806 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.764886 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.764963 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.765034 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:31Z","lastTransitionTime":"2025-10-08T13:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.868153 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.868440 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.868521 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.868592 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.868667 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:31Z","lastTransitionTime":"2025-10-08T13:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.872473 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.872553 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.872476 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:31 crc kubenswrapper[5065]: E1008 13:19:31.872805 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:31 crc kubenswrapper[5065]: E1008 13:19:31.872694 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:31 crc kubenswrapper[5065]: E1008 13:19:31.872569 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.970682 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.970932 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.971016 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.971119 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:31 crc kubenswrapper[5065]: I1008 13:19:31.971186 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:31Z","lastTransitionTime":"2025-10-08T13:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.073820 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.073867 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.073884 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.073902 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.073915 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:32Z","lastTransitionTime":"2025-10-08T13:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.175919 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.175973 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.175990 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.176013 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.176029 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:32Z","lastTransitionTime":"2025-10-08T13:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.277500 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.277680 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.277735 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.277779 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.277806 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:32Z","lastTransitionTime":"2025-10-08T13:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.380124 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.380174 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.380190 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.380212 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.380229 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:32Z","lastTransitionTime":"2025-10-08T13:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.483406 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.483487 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.483505 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.483529 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.483546 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:32Z","lastTransitionTime":"2025-10-08T13:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.586947 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.587023 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.587042 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.587247 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.587276 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:32Z","lastTransitionTime":"2025-10-08T13:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.689563 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.689611 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.689622 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.689637 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.689648 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:32Z","lastTransitionTime":"2025-10-08T13:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.792398 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.792452 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.792463 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.792480 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.792490 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:32Z","lastTransitionTime":"2025-10-08T13:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.873056 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:32 crc kubenswrapper[5065]: E1008 13:19:32.873337 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.895020 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.895065 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.895077 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.895092 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.895104 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:32Z","lastTransitionTime":"2025-10-08T13:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.998505 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.998551 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.998563 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.998579 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:32 crc kubenswrapper[5065]: I1008 13:19:32.998591 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:32Z","lastTransitionTime":"2025-10-08T13:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.100982 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.101032 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.101043 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.101060 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.101073 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:33Z","lastTransitionTime":"2025-10-08T13:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.204685 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.204742 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.204754 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.204773 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.204785 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:33Z","lastTransitionTime":"2025-10-08T13:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.307866 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.307918 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.307930 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.307949 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.307959 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:33Z","lastTransitionTime":"2025-10-08T13:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.410956 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.411018 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.411034 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.411051 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.411063 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:33Z","lastTransitionTime":"2025-10-08T13:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.513132 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.513219 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.513241 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.513267 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.513285 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:33Z","lastTransitionTime":"2025-10-08T13:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.616665 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.616725 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.616740 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.616761 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.616776 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:33Z","lastTransitionTime":"2025-10-08T13:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.719176 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.719240 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.719256 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.719279 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.719297 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:33Z","lastTransitionTime":"2025-10-08T13:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.821961 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.822005 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.822020 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.822035 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.822045 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:33Z","lastTransitionTime":"2025-10-08T13:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.873271 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.873526 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.873551 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:33 crc kubenswrapper[5065]: E1008 13:19:33.873636 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:33 crc kubenswrapper[5065]: E1008 13:19:33.873766 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:33 crc kubenswrapper[5065]: E1008 13:19:33.873912 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.876387 5065 scope.go:117] "RemoveContainer" containerID="147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.889897 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.925619 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.925955 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.925971 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.925995 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:33 crc kubenswrapper[5065]: I1008 13:19:33.926014 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:33Z","lastTransitionTime":"2025-10-08T13:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.028381 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.028437 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.028456 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.028471 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.028479 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:34Z","lastTransitionTime":"2025-10-08T13:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.130456 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.130499 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.130514 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.130530 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.130539 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:34Z","lastTransitionTime":"2025-10-08T13:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.233857 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.233890 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.233899 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.233914 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.233924 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:34Z","lastTransitionTime":"2025-10-08T13:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.266278 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/2.log"
Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.269161 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerStarted","Data":"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"}
Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.269880 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96g69"
Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.285799 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z"
Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.299905 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.320371 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.336132 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.336789 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.336854 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.336863 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.336880 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.336891 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:34Z","lastTransitionTime":"2025-10-08T13:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.346678 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.356533 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.377024 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:09Z\\\",\\\"message\\\":\\\"rator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624054 6813 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-96g69 openshift-machine-config-operator/machine-config-daemon-f2pbj openshift-multus/network-metrics-daemon-6nwh2 openshift-multus/multus-additional-cni-plugins-8xgfx openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-fdcv2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-etcd/etcd-crc openshift-multus/multus-dkvkk]\\\\nI1008 13:19:09.624066 6813 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624080 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1008 13:19:09.624088 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:19:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z"
Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.391152 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"2025-10-08T13:18:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f\\\\n2025-10-08T13:18:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f to /host/opt/cni/bin/\\\\n2025-10-08T13:18:40Z [verbose] multus-daemon started\\\\n2025-10-08T13:18:40Z [verbose] Readiness Indicator file check\\\\n2025-10-08T13:19:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.407589 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.418016 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 
Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.439133 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.439171 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.439203 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.439219 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.439230 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:34Z","lastTransitionTime":"2025-10-08T13:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.442950 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.454472 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63c8511-ce18-4344-b40d-a2868aafd953\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f437667914b286a4a5be10b7d8e0ff79549b694e7a427b67e403abd0cf67496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5b09ea08287ed83d2bac95c8b6780b91269b8507b63b1324242eb2f2a7fe840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dac57ae099af0a2f05f17da9ddc0853b5513bc747fd5f0aa959d7f3baca74b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.470782 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.483486 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.498708 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.509652 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f2cb86-b1bd-4d02-9812-29085f4b534f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f47fb4e50df5a6c060421f131f23d561f71d0e8bfa1a9769fedf8380d9162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd634b3795fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd
634b3795fff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.523023 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 
1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z"
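Each status payload in these entries is ordinary JSON that has been quoted twice, once when the patch is embedded in the Go error string and once when klog quotes the whole message, which is why every literal quote shows up as \\\" in the journal. A small sketch for recovering a readable patch from a pasted line; the LINE value is a shortened stand-in for any of the entries above, and the two markers are the delimiters these messages actually use:

```python
#!/usr/bin/env python3
"""Sketch: pretty-print the doubly-quoted status patch embedded in a
kubenswrapper "failed to patch status" journal line. LINE is a shortened
stand-in; paste a full line from the log in its place."""
import json

LINE = r'... err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": ...'

# The patch sits between these two markers in every such line.
start = LINE.index('failed to patch status \\"') + len('failed to patch status \\"')
end = LINE.index('\\" for pod')
payload = LINE[start:end]

# Undo the two layers of quoting: \\\" -> \" on the first pass, \" -> " on the second.
for _ in range(2):
    payload = payload.encode().decode("unicode_escape")

print(json.dumps(json.loads(payload), indent=2))
```

Pasting a complete line from the log in place of LINE yields the full conditions and containerStatuses objects the kubelet was trying to record.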
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.541461 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.541499 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.541507 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.541521 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.541531 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:34Z","lastTransitionTime":"2025-10-08T13:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.544013 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:34Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.644149 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.644190 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.644205 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.644224 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.644239 5065 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:34Z","lastTransitionTime":"2025-10-08T13:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.746989 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.747032 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.747040 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.747057 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.747066 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:34Z","lastTransitionTime":"2025-10-08T13:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.849757 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.849800 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.849812 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.849827 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.849835 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:34Z","lastTransitionTime":"2025-10-08T13:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.873133 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:34 crc kubenswrapper[5065]: E1008 13:19:34.873493 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.952557 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.952633 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.952650 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.952673 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:34 crc kubenswrapper[5065]: I1008 13:19:34.952691 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:34Z","lastTransitionTime":"2025-10-08T13:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.055406 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.055461 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.055494 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.055510 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.055519 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.157958 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.158006 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.158015 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.158030 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.158039 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.260480 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.260521 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.260534 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.260551 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.260562 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.272603 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/3.log" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.273246 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/2.log" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.275940 5065 generic.go:334] "Generic (PLEG): container finished" podID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerID="4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f" exitCode=1 Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.275977 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.276007 5065 scope.go:117] "RemoveContainer" containerID="147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.276762 5065 scope.go:117] "RemoveContainer" containerID="4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f" Oct 08 13:19:35 crc kubenswrapper[5065]: E1008 13:19:35.276946 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.291289 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.303542 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.313942 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f2cb86-b1bd-4d02-9812-29085f4b534f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f47fb4e50df5a6c060421f131f23d561f71d0e8bfa1a9769fedf8380d9162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd634b3795fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd634b3795fff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"
2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.329071 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea
3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.341578 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.351482 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.364910 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.364960 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 
13:19:35.364973 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.365002 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.365016 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.368243 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.381703 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.403760 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://147f00e5a84aabf97a267f10feb97f2e8c213266838359f6a0d016b07d2ba08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:09Z\\\",\\\"message\\\":\\\"rator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624054 6813 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-96g69 openshift-machine-config-operator/machine-config-daemon-f2pbj openshift-multus/network-metrics-daemon-6nwh2 openshift-multus/multus-additional-cni-plugins-8xgfx openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-fdcv2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-etcd/etcd-crc openshift-multus/multus-dkvkk]\\\\nI1008 13:19:09.624066 6813 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1008 13:19:09.624080 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1008 13:19:09.624088 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:34Z\\\",\\\"message\\\":\\\" after 0 failed attempt(s)\\\\nI1008 13:19:34.710545 7153 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name 
service/openshift-ovn-kubernetes/ovn-kubernetes-control-plane. OVN-Kubernetes controller took 0.123099829 seconds. No OVN measurement.\\\\nI1008 13:19:34.710550 7153 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-96g69\\\\nI1008 13:19:34.710652 7153 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 13:19:34.710666 7153 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 13:19:34.710678 7153 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1008 13:19:34.710712 7153 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1008 13:19:34.710766 7153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:19:34.710837 7153 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574
53265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.406786 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.406814 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.406829 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.406844 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.406853 5065 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.419988 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"2025-10-08T13:18:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f\\\\n2025-10-08T13:18:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f to /host/opt/cni/bin/\\\\n2025-10-08T13:18:40Z [verbose] multus-daemon started\\\\n2025-10-08T13:18:40Z [verbose] Readiness Indicator file check\\\\n2025-10-08T13:19:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: E1008 13:19:35.420924 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.424165 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.424195 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.424205 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.424219 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.424229 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: E1008 13:19:35.437733 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.439808 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.443226 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.443268 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc 
kubenswrapper[5065]: I1008 13:19:35.443278 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.443293 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.443305 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.457383 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:1
8:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: E1008 13:19:35.458486 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.461634 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.461661 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.461669 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.461682 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.461690 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.471327 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: E1008 13:19:35.474575 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.479865 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.479910 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.479920 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.479935 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.479943 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.482028 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: E1008 13:19:35.492075 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: E1008 13:19:35.492191 5065 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.493593 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.493817 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.493850 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.493861 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.493876 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.493885 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.504087 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.515801 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.536789 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09
a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.548065 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63c8511-ce18-4344-b40d-a2868aafd953\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f437667914b286a4a5be10b7d8e0ff79549b694e7a427b67e403abd0cf67496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5b09ea08287ed83d2bac95c8b6780b91269b8507b63b1324242eb2f2a7fe840\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dac57ae099af0a2f05f17da9ddc0853b5513bc747fd5f0aa959d7f3baca74b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:35Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.596591 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.596657 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.596679 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 
13:19:35.596707 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.596729 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.699514 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.699575 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.699587 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.699607 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.699619 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.803118 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.803185 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.803197 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.803214 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.803225 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.873141 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.873193 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.873246 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:35 crc kubenswrapper[5065]: E1008 13:19:35.873394 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:35 crc kubenswrapper[5065]: E1008 13:19:35.873637 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:35 crc kubenswrapper[5065]: E1008 13:19:35.873710 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.906125 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.906182 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.906199 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.906220 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:35 crc kubenswrapper[5065]: I1008 13:19:35.906237 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:35Z","lastTransitionTime":"2025-10-08T13:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.008166 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.008213 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.008224 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.008241 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.008253 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:36Z","lastTransitionTime":"2025-10-08T13:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.110896 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.110943 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.110956 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.110975 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.110989 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:36Z","lastTransitionTime":"2025-10-08T13:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.213287 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.213323 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.213333 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.213348 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.213359 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:36Z","lastTransitionTime":"2025-10-08T13:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.280341 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/3.log" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.283536 5065 scope.go:117] "RemoveContainer" containerID="4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f" Oct 08 13:19:36 crc kubenswrapper[5065]: E1008 13:19:36.283850 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.294896 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f2cb86-b1bd-4d02-9812-29085f4b534f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f47fb4e50df5a6c060421f131f23d561f71d0e8bfa1a9769fedf8380d9162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd634b3795fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd634b3795fff\\\",\\\"exitCode\\\":0,\
\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.309960 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.315972 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.315998 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.316006 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.316019 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.316029 5065 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:36Z","lastTransitionTime":"2025-10-08T13:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.322584 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.336568 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.347653 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.358399 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.367723 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 
13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.389864 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.418515 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.418553 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.418562 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.418577 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.418588 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:36Z","lastTransitionTime":"2025-10-08T13:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.420789 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.432457 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.443516 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.461102 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4611327b4860bcfecb38884b6f6ef99f6928a14b
eddbf43941724237b1f43d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:34Z\\\",\\\"message\\\":\\\" after 0 failed attempt(s)\\\\nI1008 13:19:34.710545 7153 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-ovn-kubernetes/ovn-kubernetes-control-plane. OVN-Kubernetes controller took 0.123099829 seconds. No OVN measurement.\\\\nI1008 13:19:34.710550 7153 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-96g69\\\\nI1008 13:19:34.710652 7153 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 13:19:34.710666 7153 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 13:19:34.710678 7153 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1008 13:19:34.710712 7153 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1008 13:19:34.710766 7153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:19:34.710837 7153 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.471778 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"2025-10-08T13:18:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f\\\\n2025-10-08T13:18:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f to 
/host/opt/cni/bin/\\\\n2025-10-08T13:18:40Z [verbose] multus-daemon started\\\\n2025-10-08T13:18:40Z [verbose] Readiness Indicator file check\\\\n2025-10-08T13:19:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.485008 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.503589 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09
a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.514643 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63c8511-ce18-4344-b40d-a2868aafd953\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f437667914b286a4a5be10b7d8e0ff79549b694e7a427b67e403abd0cf67496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5b09ea08287ed83d2bac95c8b6780b91269b8507b63b1324242eb2f2a7fe840\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dac57ae099af0a2f05f17da9ddc0853b5513bc747fd5f0aa959d7f3baca74b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.520469 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.520508 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.520519 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 
13:19:36.520536 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.520547 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:36Z","lastTransitionTime":"2025-10-08T13:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.527164 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.538380 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.550721 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:36Z is after 2025-08-24T17:21:41Z"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.622852 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.622887 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.622895 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.622909 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.622920 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:36Z","lastTransitionTime":"2025-10-08T13:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.726116 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.726168 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.726181 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.726200 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.726214 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:36Z","lastTransitionTime":"2025-10-08T13:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.827947 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.827985 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.827997 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.828015 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.828026 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:36Z","lastTransitionTime":"2025-10-08T13:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.872694 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2"
Oct 08 13:19:36 crc kubenswrapper[5065]: E1008 13:19:36.872861 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.929818 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.929857 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.929867 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.929883 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:36 crc kubenswrapper[5065]: I1008 13:19:36.929895 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:36Z","lastTransitionTime":"2025-10-08T13:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.031836 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.031872 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.031881 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.031895 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.031904 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:37Z","lastTransitionTime":"2025-10-08T13:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.135368 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.135460 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.135486 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.135514 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.135534 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:37Z","lastTransitionTime":"2025-10-08T13:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.239354 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.239444 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.239460 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.239482 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.239497 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:37Z","lastTransitionTime":"2025-10-08T13:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.342023 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.342059 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.342067 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.342081 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.342091 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:37Z","lastTransitionTime":"2025-10-08T13:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.444557 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.444636 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.444648 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.444666 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.444679 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:37Z","lastTransitionTime":"2025-10-08T13:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.547767 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.547839 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.547858 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.547882 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.547900 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:37Z","lastTransitionTime":"2025-10-08T13:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.650312 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.650371 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.650387 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.650451 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.650491 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:37Z","lastTransitionTime":"2025-10-08T13:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.753025 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.753058 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.753066 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.753078 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.753086 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:37Z","lastTransitionTime":"2025-10-08T13:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.856088 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.856151 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.856168 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.856194 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.856213 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:37Z","lastTransitionTime":"2025-10-08T13:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.873136 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.873184 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.873146 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 13:19:37 crc kubenswrapper[5065]: E1008 13:19:37.873315 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 13:19:37 crc kubenswrapper[5065]: E1008 13:19:37.873461 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 13:19:37 crc kubenswrapper[5065]: E1008 13:19:37.873540 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.958196 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.958254 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.958270 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.958293 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:37 crc kubenswrapper[5065]: I1008 13:19:37.958310 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:37Z","lastTransitionTime":"2025-10-08T13:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.061318 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.061368 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.061382 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.061403 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.061449 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:38Z","lastTransitionTime":"2025-10-08T13:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.163905 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.163946 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.163956 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.163971 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.163980 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:38Z","lastTransitionTime":"2025-10-08T13:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.266449 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.266505 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.266520 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.266542 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.266558 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:38Z","lastTransitionTime":"2025-10-08T13:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.369124 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.369169 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.369180 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.369198 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.369210 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:38Z","lastTransitionTime":"2025-10-08T13:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.472137 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.472178 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.472189 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.472204 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.472215 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:38Z","lastTransitionTime":"2025-10-08T13:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.574032 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.574074 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.574087 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.574106 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.574117 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:38Z","lastTransitionTime":"2025-10-08T13:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.676903 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.676945 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.676956 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.676970 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.676979 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:38Z","lastTransitionTime":"2025-10-08T13:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.779222 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.779257 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.779268 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.779284 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.779298 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:38Z","lastTransitionTime":"2025-10-08T13:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.873486 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2"
Oct 08 13:19:38 crc kubenswrapper[5065]: E1008 13:19:38.873689 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.881217 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.881287 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.881309 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.881337 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.881360 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:38Z","lastTransitionTime":"2025-10-08T13:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.896486 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:38Z is after 2025-08-24T17:21:41Z"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.911979 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:38Z is after 2025-08-24T17:21:41Z"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.926042 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:38Z is after 2025-08-24T17:21:41Z"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.935456 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:38Z is after 2025-08-24T17:21:41Z"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.946140 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:38Z is after 2025-08-24T17:21:41Z"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.959404 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:38Z is after 2025-08-24T17:21:41Z"
Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.978605 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:34Z\\\",\\\"message\\\":\\\" after 0 failed attempt(s)\\\\nI1008 13:19:34.710545 7153 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-ovn-kubernetes/ovn-kubernetes-control-plane. OVN-Kubernetes controller took 0.123099829 seconds. No OVN measurement.\\\\nI1008 13:19:34.710550 7153 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-96g69\\\\nI1008 13:19:34.710652 7153 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 13:19:34.710666 7153 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 13:19:34.710678 7153 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1008 13:19:34.710712 7153 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1008 13:19:34.710766 7153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:19:34.710837 7153 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:38Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.983730 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.983763 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.983772 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.983787 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.983800 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:38Z","lastTransitionTime":"2025-10-08T13:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:38 crc kubenswrapper[5065]: I1008 13:19:38.989181 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"2025-10-08T13:18:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f\\\\n2025-10-08T13:18:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f to /host/opt/cni/bin/\\\\n2025-10-08T13:18:40Z [verbose] multus-daemon started\\\\n2025-10-08T13:18:40Z [verbose] Readiness Indicator file check\\\\n2025-10-08T13:19:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:38Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.007474 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09
a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.016748 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63c8511-ce18-4344-b40d-a2868aafd953\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f437667914b286a4a5be10b7d8e0ff79549b694e7a427b67e403abd0cf67496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5b09ea08287ed83d2bac95c8b6780b91269b8507b63b1324242eb2f2a7fe840\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dac57ae099af0a2f05f17da9ddc0853b5513bc747fd5f0aa959d7f3baca74b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.028202 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.038184 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.050025 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.061713 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f2cb86-b1bd-4d02-9812-29085f4b534f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f47fb4e50df5a6c060421f131f23d561f71d0e8bfa1a9769fedf8380d9162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd634b3795fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd634b3795fff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.077891 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.086622 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.086660 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.086670 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.086687 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.086697 5065 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:39Z","lastTransitionTime":"2025-10-08T13:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.090924 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.100087 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.112369 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.121964 5065 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:39Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.188999 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.189060 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 
13:19:39.189069 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.189084 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.189093 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:39Z","lastTransitionTime":"2025-10-08T13:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.291137 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.291176 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.291185 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.291200 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.291209 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:39Z","lastTransitionTime":"2025-10-08T13:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.393028 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.393080 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.393090 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.393105 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.393113 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:39Z","lastTransitionTime":"2025-10-08T13:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.495446 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.495489 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.495500 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.495516 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.495527 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:39Z","lastTransitionTime":"2025-10-08T13:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.597889 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.597932 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.597961 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.597977 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.597987 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:39Z","lastTransitionTime":"2025-10-08T13:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.700600 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.700649 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.700657 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.700674 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.700682 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:39Z","lastTransitionTime":"2025-10-08T13:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.802734 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.802801 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.802813 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.802828 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.802837 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:39Z","lastTransitionTime":"2025-10-08T13:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.872606 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.872707 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.872819 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:39 crc kubenswrapper[5065]: E1008 13:19:39.872813 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:39 crc kubenswrapper[5065]: E1008 13:19:39.872986 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:39 crc kubenswrapper[5065]: E1008 13:19:39.873068 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.905767 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.905829 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.905851 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.905878 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:39 crc kubenswrapper[5065]: I1008 13:19:39.905899 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:39Z","lastTransitionTime":"2025-10-08T13:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.008327 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.008400 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.008456 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.008491 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.008513 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:40Z","lastTransitionTime":"2025-10-08T13:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.111034 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.111069 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.111077 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.111092 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.111102 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:40Z","lastTransitionTime":"2025-10-08T13:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.214184 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.214228 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.214239 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.214256 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.214268 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:40Z","lastTransitionTime":"2025-10-08T13:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.316876 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.316924 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.316934 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.316952 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.316962 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:40Z","lastTransitionTime":"2025-10-08T13:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.419947 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.420026 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.420061 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.420090 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.420112 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:40Z","lastTransitionTime":"2025-10-08T13:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.522277 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.522344 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.522367 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.522399 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.522455 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:40Z","lastTransitionTime":"2025-10-08T13:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.624369 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.624445 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.624458 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.624474 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.624485 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:40Z","lastTransitionTime":"2025-10-08T13:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.726876 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.726910 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.726918 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.726933 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.726942 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:40Z","lastTransitionTime":"2025-10-08T13:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.829719 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.830077 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.830091 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.830110 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.830131 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:40Z","lastTransitionTime":"2025-10-08T13:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.873371 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:40 crc kubenswrapper[5065]: E1008 13:19:40.873552 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.932546 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.932617 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.932637 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.932660 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:40 crc kubenswrapper[5065]: I1008 13:19:40.932677 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:40Z","lastTransitionTime":"2025-10-08T13:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.034578 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.034635 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.034655 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.034677 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.034694 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:41Z","lastTransitionTime":"2025-10-08T13:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.136819 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.136856 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.136866 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.136882 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.136891 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:41Z","lastTransitionTime":"2025-10-08T13:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.239218 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.239267 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.239279 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.239297 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.239310 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:41Z","lastTransitionTime":"2025-10-08T13:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.341646 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.341696 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.341707 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.341726 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.341736 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:41Z","lastTransitionTime":"2025-10-08T13:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.444577 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.444632 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.444654 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.444675 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.444691 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:41Z","lastTransitionTime":"2025-10-08T13:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.547012 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.547061 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.547076 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.547096 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.547111 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:41Z","lastTransitionTime":"2025-10-08T13:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.649491 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.649549 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.649559 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.649572 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.649581 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:41Z","lastTransitionTime":"2025-10-08T13:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.751345 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.751382 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.751394 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.751410 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.751479 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:41Z","lastTransitionTime":"2025-10-08T13:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.822668 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.822755 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.822796 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.822835 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.822813434 +0000 UTC m=+147.600195201 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.822863 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.822902 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.822870 5065 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.823074 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.823064791 +0000 UTC m=+147.600446548 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.822970 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.823238 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.823252 5065 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.823283 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.823274066 +0000 UTC m=+147.600655823 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.822986 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.823035 5065 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.823597 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.823584895 +0000 UTC m=+147.600966652 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.823584 5065 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.823635 5065 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.823761 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.823731068 +0000 UTC m=+147.601112825 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.854285 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.854344 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.854357 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.854380 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.854395 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:41Z","lastTransitionTime":"2025-10-08T13:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.872919 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.872932 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.873084 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.873056 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.873290 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:41 crc kubenswrapper[5065]: E1008 13:19:41.873398 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.957704 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.957757 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.957773 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.957794 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:41 crc kubenswrapper[5065]: I1008 13:19:41.957809 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:41Z","lastTransitionTime":"2025-10-08T13:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.061334 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.061408 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.061480 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.061514 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.061538 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:42Z","lastTransitionTime":"2025-10-08T13:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.165984 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.166037 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.166050 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.166070 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.166084 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:42Z","lastTransitionTime":"2025-10-08T13:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.269349 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.269446 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.269468 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.269491 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.269509 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:42Z","lastTransitionTime":"2025-10-08T13:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.372340 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.372453 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.372478 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.372512 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.372533 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:42Z","lastTransitionTime":"2025-10-08T13:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.475767 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.475840 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.475880 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.475919 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.475943 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:42Z","lastTransitionTime":"2025-10-08T13:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.578777 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.578834 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.578851 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.578874 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.578890 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:42Z","lastTransitionTime":"2025-10-08T13:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.682044 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.682117 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.682137 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.682165 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.682188 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:42Z","lastTransitionTime":"2025-10-08T13:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.785992 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.786058 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.786082 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.786108 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.786128 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:42Z","lastTransitionTime":"2025-10-08T13:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.872932 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:42 crc kubenswrapper[5065]: E1008 13:19:42.873137 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.888519 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.888576 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.888593 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.888617 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.888636 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:42Z","lastTransitionTime":"2025-10-08T13:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.991787 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.991838 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.991849 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.991868 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:42 crc kubenswrapper[5065]: I1008 13:19:42.991879 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:42Z","lastTransitionTime":"2025-10-08T13:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.095004 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.095103 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.095128 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.095158 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.095180 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:43Z","lastTransitionTime":"2025-10-08T13:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.197973 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.198017 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.198031 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.198054 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.198065 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:43Z","lastTransitionTime":"2025-10-08T13:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.300816 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.300860 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.300871 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.300887 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.300899 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:43Z","lastTransitionTime":"2025-10-08T13:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.403332 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.403381 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.403393 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.403411 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.403442 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:43Z","lastTransitionTime":"2025-10-08T13:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.505666 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.505703 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.505715 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.505733 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.505744 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:43Z","lastTransitionTime":"2025-10-08T13:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.608966 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.609054 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.609085 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.609190 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.609269 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:43Z","lastTransitionTime":"2025-10-08T13:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.712646 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.712696 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.712715 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.712736 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.712751 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:43Z","lastTransitionTime":"2025-10-08T13:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.815762 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.815838 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.815861 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.815891 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.815913 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:43Z","lastTransitionTime":"2025-10-08T13:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.872774 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.872820 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:43 crc kubenswrapper[5065]: E1008 13:19:43.872950 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.873044 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:43 crc kubenswrapper[5065]: E1008 13:19:43.873057 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:43 crc kubenswrapper[5065]: E1008 13:19:43.873260 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.918619 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.918692 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.918702 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.918725 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:43 crc kubenswrapper[5065]: I1008 13:19:43.918738 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:43Z","lastTransitionTime":"2025-10-08T13:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.022323 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.022374 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.022388 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.022409 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.022453 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:44Z","lastTransitionTime":"2025-10-08T13:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.125189 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.125227 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.125235 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.125274 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.125294 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:44Z","lastTransitionTime":"2025-10-08T13:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.228591 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.228678 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.228701 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.228731 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.228753 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:44Z","lastTransitionTime":"2025-10-08T13:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.332162 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.332214 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.332224 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.332239 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.332250 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:44Z","lastTransitionTime":"2025-10-08T13:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.435035 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.435082 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.435094 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.435111 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.435123 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:44Z","lastTransitionTime":"2025-10-08T13:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.537358 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.537389 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.537402 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.537437 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.537449 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:44Z","lastTransitionTime":"2025-10-08T13:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.640806 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.640891 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.640916 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.640950 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.640978 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:44Z","lastTransitionTime":"2025-10-08T13:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.743674 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.743727 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.743739 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.743758 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.743774 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:44Z","lastTransitionTime":"2025-10-08T13:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.846849 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.846905 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.846922 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.846948 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.846966 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:44Z","lastTransitionTime":"2025-10-08T13:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.872892 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:44 crc kubenswrapper[5065]: E1008 13:19:44.873121 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.949394 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.949458 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.949467 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.949480 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:44 crc kubenswrapper[5065]: I1008 13:19:44.949490 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:44Z","lastTransitionTime":"2025-10-08T13:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.052093 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.052136 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.052144 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.052160 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.052168 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.154946 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.154983 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.154994 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.155011 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.155023 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.257090 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.257127 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.257135 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.257148 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.257158 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.359752 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.359782 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.359790 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.359803 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.359812 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.462068 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.462103 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.462119 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.462137 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.462147 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.564142 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.564182 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.564190 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.564204 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.564213 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.666525 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.666563 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.666577 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.666597 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.666612 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.769598 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.769652 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.769662 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.769682 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.769693 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.781197 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.781240 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.781251 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.781272 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.781287 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: E1008 13:19:45.794993 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.799862 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.799937 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.799950 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.799967 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.799978 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: E1008 13:19:45.820271 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.824214 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.824264 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.824278 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.824296 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.824310 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: E1008 13:19:45.836658 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.840650 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.840692 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
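The repeated patch failures above all trace back to one root cause: the certificate served by the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, well before the current time the kubelet reports. A minimal Go diagnostic sketch for confirming this from the node follows; the endpoint address is taken from the log, while the file name and output format are illustrative:

// certcheck.go - diagnostic sketch, assuming the webhook endpoint named in
// the log above is reachable from the node. InsecureSkipVerify is used only
// so the expired certificate can be fetched and inspected; never use it to
// make real trust decisions.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// The leaf certificate is first in the peer chain.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}

Run on the node, this should report a notAfter of 2025-08-24T17:21:41Z, matching the x509 error in the retry entries above and below.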
event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.840703 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.840721 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.840733 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: E1008 13:19:45.854640 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.858210 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.858250 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
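Note the pattern in these entries: within one sync the kubelet logs "Error updating node status, will retry" a fixed number of times, then gives up with "update node status exceeds retry count" (see the next entry below). A hedged Go sketch of that bounded-retry shape follows; updateNodeStatus is a stand-in rather than the real kubelet API, and the budget of 5 mirrors the kubelet's nodeStatusUpdateRetry constant rather than anything stated in this log:

// retrysketch.go - illustrative sketch of the bounded retry pattern visible
// in this log. The update function and retry budget are assumptions.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed per-sync attempt budget

func updateNodeStatus() error {
	// Stand-in for the status PATCH the webhook rejects with the x509 error.
	return errors.New("failed calling webhook: certificate has expired")
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := updateNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return
	}
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}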
event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.858258 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.858275 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.858285 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: E1008 13:19:45.870011 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:45Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:45 crc kubenswrapper[5065]: E1008 13:19:45.870177 5065 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.872117 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.872179 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.872201 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.872222 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.872237 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.872520 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.872556 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.872520 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:45 crc kubenswrapper[5065]: E1008 13:19:45.872640 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:45 crc kubenswrapper[5065]: E1008 13:19:45.873010 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:45 crc kubenswrapper[5065]: E1008 13:19:45.873653 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.974153 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.974206 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.974217 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.974233 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:45 crc kubenswrapper[5065]: I1008 13:19:45.974245 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:45Z","lastTransitionTime":"2025-10-08T13:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.077175 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.077209 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.077217 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.077233 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.077243 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:46Z","lastTransitionTime":"2025-10-08T13:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.179633 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.179672 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.179683 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.179698 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.179711 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:46Z","lastTransitionTime":"2025-10-08T13:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.283112 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.283185 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.283201 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.283226 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.283241 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:46Z","lastTransitionTime":"2025-10-08T13:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.386334 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.386404 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.386433 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.386452 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.386465 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:46Z","lastTransitionTime":"2025-10-08T13:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.489593 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.489635 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.489644 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.489659 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.489667 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:46Z","lastTransitionTime":"2025-10-08T13:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.592197 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.592234 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.592244 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.592264 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.592273 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:46Z","lastTransitionTime":"2025-10-08T13:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.694691 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.694728 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.694740 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.694755 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.694766 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:46Z","lastTransitionTime":"2025-10-08T13:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.797487 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.797524 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.797533 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.797551 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.797560 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:46Z","lastTransitionTime":"2025-10-08T13:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.873572 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:46 crc kubenswrapper[5065]: E1008 13:19:46.873996 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.874530 5065 scope.go:117] "RemoveContainer" containerID="4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f" Oct 08 13:19:46 crc kubenswrapper[5065]: E1008 13:19:46.874646 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.899982 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.900063 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.900092 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.900105 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:46 crc kubenswrapper[5065]: I1008 13:19:46.900115 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:46Z","lastTransitionTime":"2025-10-08T13:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.003863 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.003903 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.003913 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.003929 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.003942 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:47Z","lastTransitionTime":"2025-10-08T13:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.106883 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.106927 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.106941 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.106963 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.106976 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:47Z","lastTransitionTime":"2025-10-08T13:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.209896 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.209980 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.209992 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.210009 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.210023 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:47Z","lastTransitionTime":"2025-10-08T13:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.313569 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.313622 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.313640 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.313665 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.313683 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:47Z","lastTransitionTime":"2025-10-08T13:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.416395 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.416444 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.416453 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.416468 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.416476 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:47Z","lastTransitionTime":"2025-10-08T13:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.518568 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.518663 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.518675 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.518693 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.518706 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:47Z","lastTransitionTime":"2025-10-08T13:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.621472 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.621581 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.621595 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.621610 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.621646 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:47Z","lastTransitionTime":"2025-10-08T13:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.724357 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.724403 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.724463 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.724481 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.724492 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:47Z","lastTransitionTime":"2025-10-08T13:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.827216 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.827259 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.827270 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.827286 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.827295 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:47Z","lastTransitionTime":"2025-10-08T13:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.873335 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.873454 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.873335 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:47 crc kubenswrapper[5065]: E1008 13:19:47.873644 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:47 crc kubenswrapper[5065]: E1008 13:19:47.873936 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:47 crc kubenswrapper[5065]: E1008 13:19:47.874111 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.929056 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.929088 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.929096 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.929110 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:47 crc kubenswrapper[5065]: I1008 13:19:47.929122 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:47Z","lastTransitionTime":"2025-10-08T13:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.031546 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.031591 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.031599 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.031615 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.031630 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:48Z","lastTransitionTime":"2025-10-08T13:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.133952 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.134030 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.134047 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.134072 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.134089 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:48Z","lastTransitionTime":"2025-10-08T13:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.237409 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.237495 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.237515 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.237536 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.237549 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:48Z","lastTransitionTime":"2025-10-08T13:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.339979 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.340033 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.340043 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.340062 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.340074 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:48Z","lastTransitionTime":"2025-10-08T13:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.442850 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.442905 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.442916 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.442932 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.442943 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:48Z","lastTransitionTime":"2025-10-08T13:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.546014 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.546070 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.546081 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.546100 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.546113 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:48Z","lastTransitionTime":"2025-10-08T13:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.649508 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.649558 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.649568 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.649582 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.649591 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:48Z","lastTransitionTime":"2025-10-08T13:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.751809 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.751882 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.751905 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.751933 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.751954 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:48Z","lastTransitionTime":"2025-10-08T13:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.854259 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.854323 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.854340 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.854357 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.854450 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:48Z","lastTransitionTime":"2025-10-08T13:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.872854 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:48 crc kubenswrapper[5065]: E1008 13:19:48.872982 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.888530 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:48Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.903017 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:48Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.919317 5065 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"2025-10-08T13:18:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f\\\\n2025-10-08T13:18:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f to /host/opt/cni/bin/\\\\n2025-10-08T13:18:40Z [verbose] multus-daemon started\\\\n2025-10-08T13:18:40Z [verbose] Readiness Indicator file check\\\\n2025-10-08T13:19:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:48Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.936605 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:48Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.955991 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:48Z is after 2025-08-24T17:21:41Z" Oct 08 
13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.956580 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.956622 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.956635 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.956651 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.956663 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:48Z","lastTransitionTime":"2025-10-08T13:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 13:19:48.976097 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:48Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:48 crc kubenswrapper[5065]: I1008 
13:19:48.989578 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:48Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.009897 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.024446 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.053855 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc
/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:34Z\\\",\\\"message\\\":\\\" after 0 failed attempt(s)\\\\nI1008 13:19:34.710545 7153 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-ovn-kubernetes/ovn-kubernetes-control-plane. OVN-Kubernetes controller took 0.123099829 seconds. 
No OVN measurement.\\\\nI1008 13:19:34.710550 7153 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-96g69\\\\nI1008 13:19:34.710652 7153 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 13:19:34.710666 7153 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 13:19:34.710678 7153 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1008 13:19:34.710712 7153 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1008 13:19:34.710766 7153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:19:34.710837 7153 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.058887 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.058959 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.058981 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.059006 5065 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.059023 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:49Z","lastTransitionTime":"2025-10-08T13:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.087709 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.105373 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63c8511-ce18-4344-b40d-a2868aafd953\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f437667914b286a4a5be10b7d8e0ff79549b694e7a427b67e403abd0cf67496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5b09ea08287ed83d2bac95c8b6780b91269b8507b63b1324242eb2f2a7fe840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dac57ae099af0a2f05f17da9ddc0853b5513bc747fd5f0aa959d7f3baca74b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.122099 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.138342 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.155069 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.162221 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.162266 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.162276 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.162296 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.162311 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:49Z","lastTransitionTime":"2025-10-08T13:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.170921 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f2cb86-b1bd-4d02-9812-29085f4b534f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f47fb4e50df5a6c060421f131f23d561f71d0e8bfa1a9769fedf8380d9162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd634b3795fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd634b3795fff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.191376 5065 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b36
85c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.213153 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.229164 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:49Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.265542 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.265609 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.265632 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.265670 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.265696 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:49Z","lastTransitionTime":"2025-10-08T13:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.368505 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.368543 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.368552 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.368569 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.368578 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:49Z","lastTransitionTime":"2025-10-08T13:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.470809 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.470845 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.470854 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.470869 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.470878 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:49Z","lastTransitionTime":"2025-10-08T13:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.572736 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.572780 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.572791 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.572808 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.572820 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:49Z","lastTransitionTime":"2025-10-08T13:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.674881 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.674923 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.674932 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.674947 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.674956 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:49Z","lastTransitionTime":"2025-10-08T13:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.777493 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.777564 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.777584 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.777611 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.777631 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:49Z","lastTransitionTime":"2025-10-08T13:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.872934 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.872978 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.873021 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:49 crc kubenswrapper[5065]: E1008 13:19:49.873140 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:49 crc kubenswrapper[5065]: E1008 13:19:49.873244 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:49 crc kubenswrapper[5065]: E1008 13:19:49.873305 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.879491 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.879562 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.879581 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.879606 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.879623 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:49Z","lastTransitionTime":"2025-10-08T13:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.982280 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.982334 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.982345 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.982361 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:49 crc kubenswrapper[5065]: I1008 13:19:49.982371 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:49Z","lastTransitionTime":"2025-10-08T13:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.084739 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.084784 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.084818 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.084837 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.084847 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:50Z","lastTransitionTime":"2025-10-08T13:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.187076 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.187112 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.187121 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.187135 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.187144 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:50Z","lastTransitionTime":"2025-10-08T13:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.288748 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.288788 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.288798 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.288812 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.288822 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:50Z","lastTransitionTime":"2025-10-08T13:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.391438 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.391478 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.391486 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.391501 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.391511 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:50Z","lastTransitionTime":"2025-10-08T13:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.493072 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.493110 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.493118 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.493132 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.493142 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:50Z","lastTransitionTime":"2025-10-08T13:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.595533 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.595572 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.595583 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.595598 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.595609 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:50Z","lastTransitionTime":"2025-10-08T13:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.698611 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.698660 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.698671 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.698688 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.698701 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:50Z","lastTransitionTime":"2025-10-08T13:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.802451 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.802509 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.802523 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.802547 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.802564 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:50Z","lastTransitionTime":"2025-10-08T13:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.873679 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:50 crc kubenswrapper[5065]: E1008 13:19:50.873865 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.905173 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.905222 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.905235 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.905253 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:50 crc kubenswrapper[5065]: I1008 13:19:50.905269 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:50Z","lastTransitionTime":"2025-10-08T13:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.007987 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.008036 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.008048 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.008068 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.008080 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:51Z","lastTransitionTime":"2025-10-08T13:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.110438 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.110480 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.110496 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.110512 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.110523 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:51Z","lastTransitionTime":"2025-10-08T13:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.212825 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.212890 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.212901 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.212916 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.212927 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:51Z","lastTransitionTime":"2025-10-08T13:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.315998 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.316054 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.316069 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.316091 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.316106 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:51Z","lastTransitionTime":"2025-10-08T13:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.418383 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.418618 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.418631 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.418650 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.418664 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:51Z","lastTransitionTime":"2025-10-08T13:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.521020 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.521091 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.521114 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.521140 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.521157 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:51Z","lastTransitionTime":"2025-10-08T13:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.624604 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.624637 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.624648 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.624666 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.624678 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:51Z","lastTransitionTime":"2025-10-08T13:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.727073 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.727152 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.727174 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.727205 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.727226 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:51Z","lastTransitionTime":"2025-10-08T13:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.830089 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.830128 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.830136 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.830148 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.830157 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:51Z","lastTransitionTime":"2025-10-08T13:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.873168 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.873170 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:51 crc kubenswrapper[5065]: E1008 13:19:51.873386 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.873195 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:51 crc kubenswrapper[5065]: E1008 13:19:51.873500 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:51 crc kubenswrapper[5065]: E1008 13:19:51.873692 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.932957 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.933014 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.933031 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.933054 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:51 crc kubenswrapper[5065]: I1008 13:19:51.933072 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:51Z","lastTransitionTime":"2025-10-08T13:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.036124 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.036202 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.036221 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.036246 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.036263 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:52Z","lastTransitionTime":"2025-10-08T13:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.138833 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.138914 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.138938 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.138969 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.138992 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:52Z","lastTransitionTime":"2025-10-08T13:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.240872 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.240934 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.240956 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.240986 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.241008 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:52Z","lastTransitionTime":"2025-10-08T13:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.343083 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.343134 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.343147 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.343165 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.343178 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:52Z","lastTransitionTime":"2025-10-08T13:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.445799 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.445853 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.445864 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.445881 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.445894 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:52Z","lastTransitionTime":"2025-10-08T13:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.549003 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.549065 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.549077 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.549098 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.549111 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:52Z","lastTransitionTime":"2025-10-08T13:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.651862 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.652008 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.652025 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.652050 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.652068 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:52Z","lastTransitionTime":"2025-10-08T13:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.755043 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.755140 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.755150 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.755165 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.755175 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:52Z","lastTransitionTime":"2025-10-08T13:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.858170 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.858210 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.858220 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.858237 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.858247 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:52Z","lastTransitionTime":"2025-10-08T13:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.873374 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:52 crc kubenswrapper[5065]: E1008 13:19:52.873630 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.960408 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.960472 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.960481 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.960496 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:52 crc kubenswrapper[5065]: I1008 13:19:52.960505 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:52Z","lastTransitionTime":"2025-10-08T13:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.063779 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.063847 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.063867 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.063892 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.063910 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:53Z","lastTransitionTime":"2025-10-08T13:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.167352 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.167411 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.167486 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.167514 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.167532 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:53Z","lastTransitionTime":"2025-10-08T13:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.270370 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.270440 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.270454 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.270472 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.270484 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:53Z","lastTransitionTime":"2025-10-08T13:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.372728 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.372804 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.372821 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.372847 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.372863 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:53Z","lastTransitionTime":"2025-10-08T13:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.475564 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.475622 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.475637 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.475659 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.475673 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:53Z","lastTransitionTime":"2025-10-08T13:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.578892 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.578935 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.578948 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.578965 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.578978 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:53Z","lastTransitionTime":"2025-10-08T13:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.681650 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.681683 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.681692 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.681709 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.681718 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:53Z","lastTransitionTime":"2025-10-08T13:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.784044 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.784090 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.784099 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.784115 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.784151 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:53Z","lastTransitionTime":"2025-10-08T13:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.872501 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.872543 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.872543 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:53 crc kubenswrapper[5065]: E1008 13:19:53.872626 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:53 crc kubenswrapper[5065]: E1008 13:19:53.872826 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:53 crc kubenswrapper[5065]: E1008 13:19:53.872906 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.886571 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.886618 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.886634 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.886783 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.886814 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:53Z","lastTransitionTime":"2025-10-08T13:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.990337 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.990379 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.990391 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.990407 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:53 crc kubenswrapper[5065]: I1008 13:19:53.990435 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:53Z","lastTransitionTime":"2025-10-08T13:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.093126 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.093169 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.093182 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.093197 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.093207 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:54Z","lastTransitionTime":"2025-10-08T13:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.195250 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.195287 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.195296 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.195314 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.195334 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:54Z","lastTransitionTime":"2025-10-08T13:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.297967 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.298029 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.298046 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.298070 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.298088 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:54Z","lastTransitionTime":"2025-10-08T13:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.401487 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.401525 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.401536 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.401551 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.401560 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:54Z","lastTransitionTime":"2025-10-08T13:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.504274 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.504330 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.504342 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.504363 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.504374 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:54Z","lastTransitionTime":"2025-10-08T13:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.608109 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.608725 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.608745 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.608775 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.608795 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:54Z","lastTransitionTime":"2025-10-08T13:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.711992 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.712056 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.712079 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.712110 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.712131 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:54Z","lastTransitionTime":"2025-10-08T13:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.814822 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.814890 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.814911 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.814943 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.814964 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:54Z","lastTransitionTime":"2025-10-08T13:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.873117 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:19:54 crc kubenswrapper[5065]: E1008 13:19:54.873382 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.917357 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.917455 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.917480 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.917528 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:54 crc kubenswrapper[5065]: I1008 13:19:54.917551 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:54Z","lastTransitionTime":"2025-10-08T13:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.022107 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.022157 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.022168 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.022187 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.022202 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:55Z","lastTransitionTime":"2025-10-08T13:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.125512 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.125598 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.125617 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.125643 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.125663 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:55Z","lastTransitionTime":"2025-10-08T13:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.228504 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.228547 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.228563 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.228589 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.228606 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:55Z","lastTransitionTime":"2025-10-08T13:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.332368 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.332459 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.332480 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.332507 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.332526 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:55Z","lastTransitionTime":"2025-10-08T13:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.434775 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.434807 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.434815 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.434831 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.434842 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:55Z","lastTransitionTime":"2025-10-08T13:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:55 crc kubenswrapper[5065]: E1008 13:19:55.465543 5065 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 13:19:55 crc kubenswrapper[5065]: E1008 13:19:55.465628 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs podName:c8a38e7c-bbc4-4255-ab4e-a056eb0655be nodeName:}" failed. No retries permitted until 2025-10-08 13:20:59.465612297 +0000 UTC m=+161.242994054 (durationBeforeRetry 1m4s). 
Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.465791 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2"
Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.537668 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.537700 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.537712 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.537726 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.537737 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:55Z","lastTransitionTime":"2025-10-08T13:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.640288 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.640328 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.640339 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.640355 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.640370 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:55Z","lastTransitionTime":"2025-10-08T13:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.742539 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.742594 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.742614 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.742637 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.742654 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:55Z","lastTransitionTime":"2025-10-08T13:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.844682 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.844737 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.844756 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.844780 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.844797 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:55Z","lastTransitionTime":"2025-10-08T13:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.872547 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.872571 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:55 crc kubenswrapper[5065]: E1008 13:19:55.872698 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.872801 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:55 crc kubenswrapper[5065]: E1008 13:19:55.872931 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:55 crc kubenswrapper[5065]: E1008 13:19:55.873026 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.947918 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.947961 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.947977 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.948001 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:55 crc kubenswrapper[5065]: I1008 13:19:55.948019 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:55Z","lastTransitionTime":"2025-10-08T13:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.050795 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.050858 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.050880 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.050907 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.050928 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:56Z","lastTransitionTime":"2025-10-08T13:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.110232 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.110335 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.110357 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.110384 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.110405 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:56Z","lastTransitionTime":"2025-10-08T13:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:56 crc kubenswrapper[5065]: E1008 13:19:56.132277 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:56Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.136569 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.136604 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.136613 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.136628 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.136637 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:56Z","lastTransitionTime":"2025-10-08T13:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:56 crc kubenswrapper[5065]: E1008 13:19:56.154555 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:56Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.159273 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.159337 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.159352 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.159374 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.159387 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:56Z","lastTransitionTime":"2025-10-08T13:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:56 crc kubenswrapper[5065]: E1008 13:19:56.174522 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:56Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.178525 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.178608 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.178627 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.178650 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.178684 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:56Z","lastTransitionTime":"2025-10-08T13:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:56 crc kubenswrapper[5065]: E1008 13:19:56.193391 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:56Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.198279 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.198336 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.198349 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.198372 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.198386 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:56Z","lastTransitionTime":"2025-10-08T13:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:56 crc kubenswrapper[5065]: E1008 13:19:56.211112 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:56Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:56 crc kubenswrapper[5065]: E1008 13:19:56.211279 5065 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.212922 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.212979 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.212996 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.213026 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.213048 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:56Z","lastTransitionTime":"2025-10-08T13:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.315220 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.315280 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.315295 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.315316 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.315341 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:56Z","lastTransitionTime":"2025-10-08T13:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.417702 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.417779 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.417795 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.417814 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.417826 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:56Z","lastTransitionTime":"2025-10-08T13:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.520627 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.520671 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.520680 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.520695 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.520705 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:56Z","lastTransitionTime":"2025-10-08T13:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.623862 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.623923 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.623936 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.623959 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.623972 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:56Z","lastTransitionTime":"2025-10-08T13:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.726814 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.726894 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.726918 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.726948 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.726969 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:56Z","lastTransitionTime":"2025-10-08T13:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.830358 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.830430 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.830445 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.830462 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.830474 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:56Z","lastTransitionTime":"2025-10-08T13:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:56 crc kubenswrapper[5065]: I1008 13:19:56.873443 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2"
Oct 08 13:19:56 crc kubenswrapper[5065]: E1008 13:19:56.874035 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be"
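The recurring NetworkPluginNotReady error states that /etc/kubernetes/cni/net.d/ contains no CNI configuration. A stdlib-only sketch below scans a directory for the file extensions CNI loaders conventionally accept (.conf, .conflist, .json); the directory path comes from the log, while the extension list is an assumption about libcni's behavior, not a verified reimplementation of it:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path reported in the kubelet log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions CNI loaders conventionally accept
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		// This is the state the kubelet is complaining about above.
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI configs:", found)
}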
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.035389 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.035692 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.035793 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.035899 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.035987 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:57Z","lastTransitionTime":"2025-10-08T13:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
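Note that in each repeat both lastHeartbeatTime and lastTransitionTime advance together even though the status stays False. Ordinarily the transition time should only move when the status flips, which suggests each update here is being rebuilt from scratch rather than patched onto a persisted status (consistent with the failing status patches shown further down). A minimal sketch of the usual bookkeeping, with illustrative local types:

package main

import (
	"fmt"
	"time"
)

type condition struct {
	Status             string
	LastHeartbeatTime  time.Time
	LastTransitionTime time.Time
}

// updateCondition sketches the usual Kubernetes convention: the heartbeat
// advances on every sync, but the transition time only moves when the
// status actually changes.
func updateCondition(c condition, newStatus string, now time.Time) condition {
	c.LastHeartbeatTime = now
	if c.Status != newStatus {
		c.Status = newStatus
		c.LastTransitionTime = now
	}
	return c
}

func main() {
	t0 := time.Date(2025, 10, 8, 13, 19, 56, 0, time.UTC)
	c := condition{Status: "True", LastHeartbeatTime: t0, LastTransitionTime: t0}
	c = updateCondition(c, "False", t0)                  // flips: transition time moves
	c = updateCondition(c, "False", t0.Add(time.Second)) // repeat: only heartbeat moves
	fmt.Println(c.LastHeartbeatTime, c.LastTransitionTime)
}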
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.346130 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.346196 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.346221 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.346250 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.346273 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:57Z","lastTransitionTime":"2025-10-08T13:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.656524 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.656898 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.657126 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.657344 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.657601 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:57Z","lastTransitionTime":"2025-10-08T13:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.872895 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.872944 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.873277 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 13:19:57 crc kubenswrapper[5065]: E1008 13:19:57.873398 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 13:19:57 crc kubenswrapper[5065]: E1008 13:19:57.873514 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 13:19:57 crc kubenswrapper[5065]: E1008 13:19:57.873609 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
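Each "No sandbox for pod can be found" line is immediately followed by "Error syncing pod, skipping ... network is not ready" for the same pod: sandbox creation for pods on the cluster network is refused while the runtime reports NetworkReady=false, whereas host-network pods are exempt from this gate. A schematic version of that check (names and types are illustrative, not kubelet source):

package main

import (
	"errors"
	"fmt"
)

type pod struct {
	Name        string
	HostNetwork bool
}

// canStartSandbox sketches the admission gate suggested by the log:
// non-host-network pods cannot get a new sandbox while the runtime
// reports NetworkReady=false.
func canStartSandbox(p pod, networkReady bool) error {
	if !networkReady && !p.HostNetwork {
		return errors.New("network is not ready: container runtime network not ready")
	}
	return nil
}

func main() {
	pods := []pod{
		{Name: "openshift-network-diagnostics/network-check-target-xd92c", HostNetwork: false},
		{Name: "openshift-ovn-kubernetes/ovnkube-node-96g69", HostNetwork: true}, // assumption: OVN node pods run on the host network
	}
	for _, p := range pods {
		if err := canStartSandbox(p, false); err != nil {
			fmt.Printf("%s: %v\n", p.Name, err)
		} else {
			fmt.Printf("%s: sandbox can start\n", p.Name)
		}
	}
}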
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.874071 5065 scope.go:117] "RemoveContainer" containerID="4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f" Oct 08 13:19:57 crc kubenswrapper[5065]: E1008 13:19:57.874178 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.965620 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.965691 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.965706 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.965725 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:57 crc kubenswrapper[5065]: I1008 13:19:57.965737 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:57Z","lastTransitionTime":"2025-10-08T13:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.068141 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.068184 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.068193 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.068209 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.068219 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:58Z","lastTransitionTime":"2025-10-08T13:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.171119 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.171179 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.171195 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.171221 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.171237 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:58Z","lastTransitionTime":"2025-10-08T13:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.478651 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.478693 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.478704 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.478722 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.478732 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:58Z","lastTransitionTime":"2025-10-08T13:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
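The status_manager failures that follow (from 13:19:58.895 onward) all end the same way: the API server cannot call the pod.network-node-identity.openshift.io webhook because its serving certificate expired on 2025-08-24, long before the node's current clock of 2025-10-08. The failing check is the standard x509 validity-window test; a stdlib sketch of the expiry half of that test, using the two timestamps taken verbatim from the entries below:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken verbatim from the webhook TLS error below.
	now, _ := time.Parse(time.RFC3339, "2025-10-08T13:19:58Z")
	notAfter, _ := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")

	// crypto/x509 rejects a certificate whose validity window [NotBefore,
	// NotAfter] does not contain the verification time; this reproduces
	// only the expiry side of that check.
	if now.After(notAfter) {
		fmt.Printf("x509: certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), notAfter.Format(time.RFC3339))
	}
}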
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.788697 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.788782 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.788804 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.788833 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.788854 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:58Z","lastTransitionTime":"2025-10-08T13:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.873295 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2"
Oct 08 13:19:58 crc kubenswrapper[5065]: E1008 13:19:58.873685 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.891594 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.891667 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.891692 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.891724 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.891748 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:58Z","lastTransitionTime":"2025-10-08T13:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.895334 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21825a9e-72d6-4850-af25-cafacf1ffff4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d640108e0f7a7b637b8637c8138400956ba76ec25edd7e162f1713313a271a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd0044741d752e87cfb724853e32b2c3253a050549d2a79642d7d6bf7d10fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://171952d40d4d28a88cb96f17b1278f68747d8d6576f82ffd05557f3f6a837ee5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6a4e6cf8396b699583eb4cce414ad4f1f744217a41a508e0e865564d8f78b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1124eb29053c2121ba3f99e69832b4da49e39bbc15d1cc52cc5f675ef4d8f430\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://477d43fb5deacb50fd388fdcbfa64a47af0f73840667d5ce84de4e90588ebd74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://227ef366a86d1d6f9e27951234644c781f1fc056f0feab4235595b8cb70dd97b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx46q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8xgfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:58Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.913773 5065 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bb62c5d-316d-4a3c-95ff-7b1de710d481\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd2b5c981a1f2fc80e3c440d08d5155e5e1b8af517f79eb2d05b94e0c53ac99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d42c63dcca1a8882e15d893bbb6526f14834e017582081b0e2f41eb8a1b0de1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wnhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mzjf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-08T13:19:58Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.932199 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713c51177ace8e10744bfb2c72dac7190f3f98e94acd6669005ab1c512b9fe87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:58Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.948372 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7d2jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43581862-a068-411a-b8f4-c06aa7951856\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d512d2e6f81bba6ebb9fef45492af020d8591633dbad40356238865dc3fb4706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2nt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7d2jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:58Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.965394 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:58Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.983021 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c326c4ef62d6a1ee164217467c92551ca365cd6f7d69bb01581f1f0195e8a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:58Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.995380 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.995434 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.995446 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.995461 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:58 crc kubenswrapper[5065]: I1008 13:19:58.995472 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:58Z","lastTransitionTime":"2025-10-08T13:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.004273 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4611327b4860bcfecb38884b6f6ef99f6928a14b
eddbf43941724237b1f43d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:34Z\\\",\\\"message\\\":\\\" after 0 failed attempt(s)\\\\nI1008 13:19:34.710545 7153 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-ovn-kubernetes/ovn-kubernetes-control-plane. OVN-Kubernetes controller took 0.123099829 seconds. No OVN measurement.\\\\nI1008 13:19:34.710550 7153 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-96g69\\\\nI1008 13:19:34.710652 7153 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 13:19:34.710666 7153 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 13:19:34.710678 7153 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1008 13:19:34.710712 7153 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1008 13:19:34.710766 7153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 13:19:34.710837 7153 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:19:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xftmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96g69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.023516 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dkvkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T13:19:25Z\\\",\\\"message\\\":\\\"2025-10-08T13:18:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f\\\\n2025-10-08T13:18:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f101c1d-3f30-4d42-bbfb-d196458bd81f to 
/host/opt/cni/bin/\\\\n2025-10-08T13:18:40Z [verbose] multus-daemon started\\\\n2025-10-08T13:18:40Z [verbose] Readiness Indicator file check\\\\n2025-10-08T13:19:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwdsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dkvkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.043063 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811e699c-f965-4344-ae9d-d9d56cdad072\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da2fb253ed4b8509a36e325870783353b37696743838a4652ec14604bb79150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a13f5456ee649ad11722862bfb5ed8213ac43b907bfc407dfd7e1d5b7339acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a925de64763c0a333325b9e6e9b283ac81bde95c508e8afa6219a3ce1ebcc262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2169aa5ec06f84641ca47ad8f77eee8d5cc09
a3ab96a545f615d9e57b59149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3878b6a4e480475a5201681a6c6b553a965dd6e80904569775e7a02768e1b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d53bdb5b6f4d5a1ee8b32f9f8e5c26fc02272a6aee1c2b36457803aaee4db2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4858e4d867bcd28db85a11cee0763c5cba0932bd3f4831227b8873eacb039898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://673487bb787cb1e2621ef48ca7b58d5bf73af4b866a148257052ab278921b4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.058323 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e63c8511-ce18-4344-b40d-a2868aafd953\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f437667914b286a4a5be10b7d8e0ff79549b694e7a427b67e403abd0cf67496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5b09ea08287ed83d2bac95c8b6780b91269b8507b63b1324242eb2f2a7fe840\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dac57ae099af0a2f05f17da9ddc0853b5513bc747fd5f0aa959d7f3baca74b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d00991f922ab27db815da8cf772a571e7dadaa31374e79a4074a2a8054f7f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.075332 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.090642 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671b8a167bbc48002f898fe4f1a043ab47ca21f22016dd5193b18e3ba0fcb301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0d148ff040ced7476ee4cae9bd0aeecb7217a861a7eafa38f08eff3c850ddc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.098430 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.098467 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.098478 5065 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.098495 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.098509 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:59Z","lastTransitionTime":"2025-10-08T13:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.106171 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.120451 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83f2cb86-b1bd-4d02-9812-29085f4b534f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4f47fb4e50df5a6c060421f131f23d561f71d0e8bfa1a9769fedf8380d9162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd634b3795fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4d9fe3f4d963a309301eaa88fd3966e348086d02d4b5646e77dd
634b3795fff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.139594 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c8cd27d-144a-4698-97fa-e53b9fd72931\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac05978aceb2470b4df2ec36008da1b93ed4ebd3c4078349f4c9fdca72a499e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326d2a678075112231824371aab0629d36360e641cf41324e7df7137e40d989d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://8632b5bcd37f3f32df16ab339a08d4e0093f0361f05bc27d7c2540cd819131bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4772b4b3685c623269f7d5aac4f625a8797c7eca55db6fd7ab32c516f6039c81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc9dfc2316a66e090d240be764ce3a1b6b207c0431049d6f1e116f6673e355f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"message\\\":\\\"le observer\\\\nW1008 13:18:37.199611 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 13:18:37.199759 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 13:18:37.201305 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1818855429/tls.crt::/tmp/serving-cert-1818855429/tls.key\\\\\\\"\\\\nI1008 13:18:37.617110 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 13:18:37.620289 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 13:18:37.620305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 13:18:37.620326 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 13:18:37.620332 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 13:18:37.626101 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1008 13:18:37.626104 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1008 13:18:37.626138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626148 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 13:18:37.626154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 13:18:37.626157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 13:18:37.626161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 13:18:37.626167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1008 13:18:37.627759 
1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa8418f47b18f079fff49e29743e2ccc03753e2bc4e9335f9887cd2ae95b2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9cb5050ae351410441d21b0f23f32c59cf1938bfe33b127ed33776465a49c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.156079 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beaec52f-b8e2-49e7-b145-e850ae4e9a8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a9c20831d81be95a224ee6ad93dc6e7624a8a774838719072e1be8d6caf875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67a460a17d6be328faa6935164fe5d886ed5ffe13a39449c213f7560e966a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5fa49cc122f5dc7770e0dc692c7dd34fa64e9a664386c634dc3eb158718bac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a165418e59f89a57a1dcd49f45eca2c0f8d4d3e0180c791c9e377e0e74657d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.168177 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fdcv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb1473-7275-422e-b8fd-e4f9869950d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7ff1e57acadef90d1f7f9acc9ade817891664a69065c968d8f74df20fc2aeba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw9vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fdcv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.181969 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ed57245f64a219fd8320d6c16b3849fae4aef818f906a0ccf00851492907c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T13:18:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgs67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f2pbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.193741 5065 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T13:18:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvfvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T13:18:51Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-6nwh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:19:59Z is after 2025-08-24T17:21:41Z" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.200740 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.200800 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.200817 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.200850 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.200867 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:59Z","lastTransitionTime":"2025-10-08T13:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.303967 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.304010 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.304021 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.304038 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.304049 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:59Z","lastTransitionTime":"2025-10-08T13:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.406184 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.406222 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.406231 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.406245 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.406254 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:59Z","lastTransitionTime":"2025-10-08T13:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.508568 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.508614 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.508629 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.508648 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.508658 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:59Z","lastTransitionTime":"2025-10-08T13:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.611900 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.611990 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.612017 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.612044 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.612061 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:59Z","lastTransitionTime":"2025-10-08T13:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
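
The setters.go:603 entries that repeat above embed the node's Ready condition as inline JSON: status False, reason KubeletNotReady, and a message naming the actual blocker, the absence of any CNI configuration file under /etc/kubernetes/cni/net.d/. Since the payload is plain JSON it can be inspected directly; a short sketch using one condition copied verbatim from these entries:

import json

# One Ready condition copied verbatim from a setters.go:603 entry above.
raw = ('{"type":"Ready","status":"False",'
       '"lastHeartbeatTime":"2025-10-08T13:20:00Z",'
       '"lastTransitionTime":"2025-10-08T13:20:00Z",'
       '"reason":"KubeletNotReady",'
       '"message":"container runtime network not ready: NetworkReady=false '
       'reason:NetworkPluginNotReady message:Network plugin returns error: '
       'no CNI configuration file in /etc/kubernetes/cni/net.d/. '
       'Has your network provider started?"}')

cond = json.loads(raw)
print(cond["type"], cond["status"], "-", cond["reason"])
print(cond["message"])

The same reason/message pair is what a kubectl describe of this node would be expected to surface in its Ready condition row while the network plugin is down.
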
Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.714577 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.714644 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.714661 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.714686 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.714704 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:59Z","lastTransitionTime":"2025-10-08T13:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.816696 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.816770 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.816787 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.816806 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.816823 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:59Z","lastTransitionTime":"2025-10-08T13:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.872763 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:19:59 crc kubenswrapper[5065]: E1008 13:19:59.872900 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.872796 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.872774 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
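
From here the kubelet pairs each still-unsandboxed pod (util.go:30, "No sandbox for pod can be found") with a pod_workers.go:1301 "Error syncing pod, skipping": sandbox creation is refused while NetworkReady=false, and the sync is retried on roughly a two-second cadence (the same pods reappear at 13:19:59.87 and again at 13:20:01.87 below). A sketch that tallies which pods are stuck from a journal stream, keyed on the pod= and podUID= fields exactly as they appear in these entries:

import re
import sys
from collections import Counter

# pod="<ns>/<name>" podUID="<uid>" field layout as in the
# pod_workers.go:1301 entries above.
PAT = re.compile(r'Error syncing pod.*?pod="([^"]+)" podUID="([^"]+)"')

stuck = Counter()
for line in sys.stdin:
    for pod_uid in PAT.findall(line):
        stuck[pod_uid] += 1

for (pod, uid), n in stuck.most_common():
    print(f"{n:4d}  {pod}  podUID={uid}")
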
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:19:59 crc kubenswrapper[5065]: E1008 13:19:59.872986 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:19:59 crc kubenswrapper[5065]: E1008 13:19:59.873120 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.919642 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.919709 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.919726 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.919752 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:19:59 crc kubenswrapper[5065]: I1008 13:19:59.919771 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:19:59Z","lastTransitionTime":"2025-10-08T13:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.022170 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.022232 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.022244 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.022264 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.022279 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:00Z","lastTransitionTime":"2025-10-08T13:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.124304 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.124351 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.124366 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.124382 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.124391 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:00Z","lastTransitionTime":"2025-10-08T13:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.226252 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.226299 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.226311 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.226337 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.226379 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:00Z","lastTransitionTime":"2025-10-08T13:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.332461 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.332531 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.332544 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.332561 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.332937 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:00Z","lastTransitionTime":"2025-10-08T13:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.436595 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.436660 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.436670 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.436696 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.436708 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:00Z","lastTransitionTime":"2025-10-08T13:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.539329 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.539715 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.539867 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.540021 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.540164 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:00Z","lastTransitionTime":"2025-10-08T13:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.643602 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.644056 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.644157 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.644258 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.644387 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:00Z","lastTransitionTime":"2025-10-08T13:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.746745 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.746793 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.746806 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.746826 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.746839 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:00Z","lastTransitionTime":"2025-10-08T13:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.849952 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.850031 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.850055 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.850081 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.850099 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:00Z","lastTransitionTime":"2025-10-08T13:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.873605 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:00 crc kubenswrapper[5065]: E1008 13:20:00.873812 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.953205 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.953274 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.953296 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.953326 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:00 crc kubenswrapper[5065]: I1008 13:20:00.953349 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:00Z","lastTransitionTime":"2025-10-08T13:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.057107 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.057153 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.057165 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.057182 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.057193 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:01Z","lastTransitionTime":"2025-10-08T13:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.160713 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.160779 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.160800 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.160826 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.160844 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:01Z","lastTransitionTime":"2025-10-08T13:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.263448 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.263511 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.263532 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.263562 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.263585 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:01Z","lastTransitionTime":"2025-10-08T13:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.366757 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.366797 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.366806 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.366821 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.366831 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:01Z","lastTransitionTime":"2025-10-08T13:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.470985 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.471048 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.471065 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.471093 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.471112 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:01Z","lastTransitionTime":"2025-10-08T13:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.573073 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.573112 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.573122 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.573136 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.573148 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:01Z","lastTransitionTime":"2025-10-08T13:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.676358 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.676401 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.676432 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.676449 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.676463 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:01Z","lastTransitionTime":"2025-10-08T13:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.778200 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.778245 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.778257 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.778274 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.778284 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:01Z","lastTransitionTime":"2025-10-08T13:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.873196 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.873247 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:01 crc kubenswrapper[5065]: E1008 13:20:01.873321 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:01 crc kubenswrapper[5065]: E1008 13:20:01.873438 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.873509 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:01 crc kubenswrapper[5065]: E1008 13:20:01.873602 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.880221 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.880255 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.880272 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.880287 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.880298 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:01Z","lastTransitionTime":"2025-10-08T13:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.982568 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.982643 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.982672 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.982693 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:01 crc kubenswrapper[5065]: I1008 13:20:01.982705 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:01Z","lastTransitionTime":"2025-10-08T13:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.085906 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.085998 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.086007 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.086022 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.086034 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:02Z","lastTransitionTime":"2025-10-08T13:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.188005 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.188051 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.188068 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.188088 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.188103 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:02Z","lastTransitionTime":"2025-10-08T13:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.291974 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.292034 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.292057 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.292086 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.292107 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:02Z","lastTransitionTime":"2025-10-08T13:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.394772 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.394840 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.394936 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.394971 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.394992 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:02Z","lastTransitionTime":"2025-10-08T13:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.498285 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.498328 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.498337 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.498352 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.498368 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:02Z","lastTransitionTime":"2025-10-08T13:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.600849 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.600896 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.600907 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.600925 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.600940 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:02Z","lastTransitionTime":"2025-10-08T13:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.703274 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.703352 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.703369 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.703387 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.703397 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:02Z","lastTransitionTime":"2025-10-08T13:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.805807 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.805859 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.805871 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.805891 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.805910 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:02Z","lastTransitionTime":"2025-10-08T13:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.872486 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:02 crc kubenswrapper[5065]: E1008 13:20:02.872633 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.908949 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.909019 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.909042 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.909071 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:02 crc kubenswrapper[5065]: I1008 13:20:02.909091 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:02Z","lastTransitionTime":"2025-10-08T13:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.011142 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.011208 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.011224 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.011245 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.011260 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:03Z","lastTransitionTime":"2025-10-08T13:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.113155 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.113190 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.113198 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.113212 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.113222 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:03Z","lastTransitionTime":"2025-10-08T13:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.216002 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.216099 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.216144 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.216178 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.216205 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:03Z","lastTransitionTime":"2025-10-08T13:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.319195 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.319239 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.319253 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.319271 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.319284 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:03Z","lastTransitionTime":"2025-10-08T13:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.421923 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.421993 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.422006 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.422021 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.422301 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:03Z","lastTransitionTime":"2025-10-08T13:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.525025 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.525075 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.525083 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.525098 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.525107 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:03Z","lastTransitionTime":"2025-10-08T13:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.628236 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.628323 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.628333 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.628346 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.628355 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:03Z","lastTransitionTime":"2025-10-08T13:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.730747 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.730805 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.730825 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.730850 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.730868 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:03Z","lastTransitionTime":"2025-10-08T13:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.834199 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.834264 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.834285 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.834331 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.834362 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:03Z","lastTransitionTime":"2025-10-08T13:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.873639 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.873797 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.873794 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:03 crc kubenswrapper[5065]: E1008 13:20:03.873978 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:03 crc kubenswrapper[5065]: E1008 13:20:03.874125 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:03 crc kubenswrapper[5065]: E1008 13:20:03.874253 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.937154 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.937204 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.937218 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.937234 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:03 crc kubenswrapper[5065]: I1008 13:20:03.937246 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:03Z","lastTransitionTime":"2025-10-08T13:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.039808 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.039871 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.039888 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.039912 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.039930 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:04Z","lastTransitionTime":"2025-10-08T13:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.142985 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.143033 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.143043 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.143059 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.143069 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:04Z","lastTransitionTime":"2025-10-08T13:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.246047 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.246120 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.246138 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.246163 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.246179 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:04Z","lastTransitionTime":"2025-10-08T13:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.349264 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.349304 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.349313 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.349329 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.349342 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:04Z","lastTransitionTime":"2025-10-08T13:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.451376 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.451478 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.451494 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.451511 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.451525 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:04Z","lastTransitionTime":"2025-10-08T13:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.553937 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.553969 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.554048 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.554087 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.554098 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:04Z","lastTransitionTime":"2025-10-08T13:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.656791 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.656831 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.656839 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.656852 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.656861 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:04Z","lastTransitionTime":"2025-10-08T13:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.759748 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.759793 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.759804 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.759819 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.759830 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:04Z","lastTransitionTime":"2025-10-08T13:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.862668 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.862720 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.862736 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.862752 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.862762 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:04Z","lastTransitionTime":"2025-10-08T13:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.873263 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:04 crc kubenswrapper[5065]: E1008 13:20:04.873389 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.964733 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.964763 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.964771 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.964782 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:04 crc kubenswrapper[5065]: I1008 13:20:04.964791 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:04Z","lastTransitionTime":"2025-10-08T13:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.067301 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.067392 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.067452 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.067483 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.067507 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:05Z","lastTransitionTime":"2025-10-08T13:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.170452 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.170521 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.170546 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.170578 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.170600 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:05Z","lastTransitionTime":"2025-10-08T13:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.272657 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.272710 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.272732 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.272755 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.272769 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:05Z","lastTransitionTime":"2025-10-08T13:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.374759 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.374807 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.374816 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.374829 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.374838 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:05Z","lastTransitionTime":"2025-10-08T13:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.477128 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.477188 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.477204 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.477226 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.477241 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:05Z","lastTransitionTime":"2025-10-08T13:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.580584 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.580659 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.580676 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.581127 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.581185 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:05Z","lastTransitionTime":"2025-10-08T13:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.684402 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.684474 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.684490 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.684510 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.684525 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:05Z","lastTransitionTime":"2025-10-08T13:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.787005 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.787051 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.787068 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.787089 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.787101 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:05Z","lastTransitionTime":"2025-10-08T13:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.872880 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.872930 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.872903 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:05 crc kubenswrapper[5065]: E1008 13:20:05.873111 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:05 crc kubenswrapper[5065]: E1008 13:20:05.873281 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:05 crc kubenswrapper[5065]: E1008 13:20:05.873389 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.889998 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.890047 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.890061 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.890077 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.890089 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:05Z","lastTransitionTime":"2025-10-08T13:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.992735 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.992818 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.992838 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.992870 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:05 crc kubenswrapper[5065]: I1008 13:20:05.992887 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:05Z","lastTransitionTime":"2025-10-08T13:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.095315 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.095397 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.095517 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.095534 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.095547 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.198089 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.198129 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.198141 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.198157 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.198170 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.300966 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.301012 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.301022 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.301041 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.301051 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.404689 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.404728 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.404737 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.404752 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.404761 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.507916 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.507967 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.507978 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.507994 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.508008 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.574600 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.574657 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.574678 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.574704 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.574721 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: E1008 13:20:06.597088 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:20:06Z is after 2025-08-24T17:21:41Z" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.601321 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.601345 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.601354 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.601366 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.601375 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: E1008 13:20:06.620618 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:20:06Z is after 2025-08-24T17:21:41Z" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.626333 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.626391 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
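Every patch attempt above terminates in the same TLS failure: the webhook's serving certificate is outside its validity window, so Go's certificate verification aborts the Post to https://127.0.0.1:9743/node with "x509: certificate has expired or is not yet valid". A minimal Go sketch of that validity-window check follows; the PEM path is hypothetical, for illustration only.

```go
// Minimal sketch: reproduce the validity-window test behind
// "x509: certificate has expired or is not yet valid".
// The certificate path below is hypothetical, not taken from this log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/tmp/webhook-serving-cert.pem") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now().UTC()
	// Same window test the TLS handshake applies to the peer certificate.
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```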
event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.626408 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.626451 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.626467 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: E1008 13:20:06.645794 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:20:06Z is after 2025-08-24T17:21:41Z" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.651479 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.651516 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
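The omitted payloads are strategic merge patches: the "$setElementOrder/conditions" directive records the ordering of the merge-key list status.conditions when the kubelet patches node status. A sketch of how such a patch can be computed, assuming k8s.io/api and k8s.io/apimachinery are available in go.mod and using toy conditions for illustration; the resulting patch should resemble the shape seen in the log, not reproduce it exactly.

```go
// Sketch: compute a strategic merge patch between an old and a new Node,
// the same mechanism that produces "$setElementOrder/conditions" payloads.
// The condition values here are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	oldNode := corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
		{Type: corev1.NodeMemoryPressure, Status: corev1.ConditionFalse},
		{Type: corev1.NodeReady, Status: corev1.ConditionTrue},
	}}}
	newNode := *oldNode.DeepCopy()
	newNode.Status.Conditions[1].Status = corev1.ConditionFalse
	newNode.Status.Conditions[1].Reason = "KubeletNotReady"

	oldJSON, _ := json.Marshal(oldNode) // errors ignored in this sketch
	newJSON, _ := json.Marshal(newNode)
	patch, err := strategicpatch.CreateTwoWayMergePatch(oldJSON, newJSON, corev1.Node{})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patch)) // expect a "$setElementOrder/conditions" directive
}
```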
event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.651526 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.651543 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.651554 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: E1008 13:20:06.666392 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:20:06Z is after 2025-08-24T17:21:41Z" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.669671 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.669736 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
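To confirm what the endpoint at 127.0.0.1:9743 is actually serving, one can complete a handshake with verification disabled and read the peer certificate's validity window. A diagnostic sketch; InsecureSkipVerify is used only so the handshake succeeds despite the expired certificate.

```go
// Diagnostic sketch: dial the webhook endpoint named in the log and print
// the validity window of the certificate it serves.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore)
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter) // expect 2025-08-24T17:21:41Z per the log
}
```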
event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.669744 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.669774 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.669787 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: E1008 13:20:06.680768 5065 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T13:20:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"137ca619-3348-4004-b5e9-6fba48af3fd0\\\",\\\"systemUUID\\\":\\\"1bc7a529-1398-49b6-b75f-648e257076b7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T13:20:06Z is after 2025-08-24T17:21:41Z" Oct 08 13:20:06 crc kubenswrapper[5065]: E1008 13:20:06.680962 5065 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.682954 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
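The terminal "update node status exceeds retry count" reflects a fixed retry budget: the upstream kubelet's nodeStatusUpdateRetry constant is 5, matching the five failed patch attempts logged above. A minimal sketch of that pattern, with names illustrative rather than the kubelet's own:

```go
// Sketch of the bounded-retry pattern behind "update node status exceeds
// retry count": a fixed number of attempts, then a terminal error.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // same budget as the upstream kubelet constant

func updateNodeStatus(tryUpdate func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdate(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Stand-in for the webhook failure seen in the log.
	webhookErr := errors.New("failed calling webhook: tls: failed to verify certificate")
	if err := updateNodeStatus(func() error { return webhookErr }); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```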
event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.683018 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.683033 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.683058 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.683069 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.786401 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.786462 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.786475 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.786493 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.786504 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.873329 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:06 crc kubenswrapper[5065]: E1008 13:20:06.873521 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.889032 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.889078 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.889100 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.889120 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.889134 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.992892 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.992946 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.992959 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.992978 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:06 crc kubenswrapper[5065]: I1008 13:20:06.992996 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:06Z","lastTransitionTime":"2025-10-08T13:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.095122 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.095161 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.095188 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.095202 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.095211 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:07Z","lastTransitionTime":"2025-10-08T13:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.197662 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.197719 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.197733 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.197750 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.197760 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:07Z","lastTransitionTime":"2025-10-08T13:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.301174 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.301262 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.301281 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.301306 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.301322 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:07Z","lastTransitionTime":"2025-10-08T13:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.404228 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.404286 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.404300 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.404320 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.404332 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:07Z","lastTransitionTime":"2025-10-08T13:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.507269 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.507319 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.507334 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.507352 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.507363 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:07Z","lastTransitionTime":"2025-10-08T13:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.610209 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.610273 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.610291 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.610314 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.610331 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:07Z","lastTransitionTime":"2025-10-08T13:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.713026 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.713093 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.713110 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.713135 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.713151 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:07Z","lastTransitionTime":"2025-10-08T13:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.815762 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.815793 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.815801 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.815814 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.815823 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:07Z","lastTransitionTime":"2025-10-08T13:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.872503 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.872604 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.872605 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:07 crc kubenswrapper[5065]: E1008 13:20:07.872735 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:07 crc kubenswrapper[5065]: E1008 13:20:07.872851 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:07 crc kubenswrapper[5065]: E1008 13:20:07.873011 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.918004 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.918055 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.918076 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.918097 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:07 crc kubenswrapper[5065]: I1008 13:20:07.918113 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:07Z","lastTransitionTime":"2025-10-08T13:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.021169 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.021227 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.021253 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.021283 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.021304 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:08Z","lastTransitionTime":"2025-10-08T13:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.124156 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.124262 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.124285 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.124314 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.124336 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:08Z","lastTransitionTime":"2025-10-08T13:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.228207 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.228282 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.228299 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.228324 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.228341 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:08Z","lastTransitionTime":"2025-10-08T13:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.331697 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.331765 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.331789 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.331820 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.331842 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:08Z","lastTransitionTime":"2025-10-08T13:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.433653 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.433838 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.433861 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.433883 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.433898 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:08Z","lastTransitionTime":"2025-10-08T13:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.536043 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.536117 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.536169 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.536188 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.536200 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:08Z","lastTransitionTime":"2025-10-08T13:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.639222 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.639295 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.639322 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.639352 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.639373 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:08Z","lastTransitionTime":"2025-10-08T13:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.742104 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.742167 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.742189 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.742218 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.742240 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:08Z","lastTransitionTime":"2025-10-08T13:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.844844 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.844922 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.844945 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.844978 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.845005 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:08Z","lastTransitionTime":"2025-10-08T13:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.872673 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:08 crc kubenswrapper[5065]: E1008 13:20:08.873244 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.874378 5065 scope.go:117] "RemoveContainer" containerID="4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f" Oct 08 13:20:08 crc kubenswrapper[5065]: E1008 13:20:08.874729 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-96g69_openshift-ovn-kubernetes(953c2ee2-f53f-4a77-8e47-2f7fc1aefc17)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.904827 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-fdcv2" podStartSLOduration=91.904807799 podStartE2EDuration="1m31.904807799s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:08.904532641 +0000 UTC m=+110.681914498" watchObservedRunningTime="2025-10-08 13:20:08.904807799 +0000 UTC m=+110.682189556" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.944750 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=35.944734302 podStartE2EDuration="35.944734302s" podCreationTimestamp="2025-10-08 13:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:08.920922482 +0000 UTC m=+110.698304229" watchObservedRunningTime="2025-10-08 13:20:08.944734302 +0000 UTC m=+110.722116049" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.947456 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.947478 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.947486 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.947517 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.947526 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:08Z","lastTransitionTime":"2025-10-08T13:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.962624 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=91.962610512 podStartE2EDuration="1m31.962610512s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:08.944957458 +0000 UTC m=+110.722339215" watchObservedRunningTime="2025-10-08 13:20:08.962610512 +0000 UTC m=+110.739992269" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.979764 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.979749963 podStartE2EDuration="1m28.979749963s" podCreationTimestamp="2025-10-08 13:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:08.962855529 +0000 UTC m=+110.740237286" watchObservedRunningTime="2025-10-08 13:20:08.979749963 +0000 UTC m=+110.757131720" Oct 08 13:20:08 crc kubenswrapper[5065]: I1008 13:20:08.993065 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podStartSLOduration=91.99304353 podStartE2EDuration="1m31.99304353s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:08.979255079 +0000 UTC m=+110.756636856" watchObservedRunningTime="2025-10-08 13:20:08.99304353 +0000 UTC m=+110.770425297" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.049815 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.050076 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.050155 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.050233 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.050311 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:09Z","lastTransitionTime":"2025-10-08T13:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.061767 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dkvkk" podStartSLOduration=92.061751877 podStartE2EDuration="1m32.061751877s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:09.044951525 +0000 UTC m=+110.822333312" watchObservedRunningTime="2025-10-08 13:20:09.061751877 +0000 UTC m=+110.839133634" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.062031 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8xgfx" podStartSLOduration=92.062028094 podStartE2EDuration="1m32.062028094s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:09.060991996 +0000 UTC m=+110.838373753" watchObservedRunningTime="2025-10-08 13:20:09.062028094 +0000 UTC m=+110.839409851" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.089275 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mzjf8" podStartSLOduration=91.089256136 podStartE2EDuration="1m31.089256136s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:09.07490021 +0000 UTC m=+110.852281967" watchObservedRunningTime="2025-10-08 13:20:09.089256136 +0000 UTC m=+110.866637883" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.097678 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7d2jj" podStartSLOduration=92.097663382 podStartE2EDuration="1m32.097663382s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:09.097144148 +0000 UTC m=+110.874525915" watchObservedRunningTime="2025-10-08 13:20:09.097663382 +0000 UTC m=+110.875045139" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.152402 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.152470 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.152481 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.152497 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.152509 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:09Z","lastTransitionTime":"2025-10-08T13:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.160130 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=90.16011226 podStartE2EDuration="1m30.16011226s" podCreationTimestamp="2025-10-08 13:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:09.157528001 +0000 UTC m=+110.934909768" watchObservedRunningTime="2025-10-08 13:20:09.16011226 +0000 UTC m=+110.937494017" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.170006 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=54.169985206 podStartE2EDuration="54.169985206s" podCreationTimestamp="2025-10-08 13:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:09.169717848 +0000 UTC m=+110.947099635" watchObservedRunningTime="2025-10-08 13:20:09.169985206 +0000 UTC m=+110.947366973" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.255088 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.255152 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.255165 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.255179 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.255188 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:09Z","lastTransitionTime":"2025-10-08T13:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.357600 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.357662 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.357684 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.357713 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.357735 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:09Z","lastTransitionTime":"2025-10-08T13:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.459543 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.459586 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.459595 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.459609 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.459619 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:09Z","lastTransitionTime":"2025-10-08T13:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.561813 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.562083 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.562152 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.562225 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.562312 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:09Z","lastTransitionTime":"2025-10-08T13:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.664602 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.664874 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.665035 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.665171 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.665314 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:09Z","lastTransitionTime":"2025-10-08T13:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.768214 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.768248 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.768256 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.768289 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.768298 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:09Z","lastTransitionTime":"2025-10-08T13:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.870350 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.870721 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.870829 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.870956 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.871116 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:09Z","lastTransitionTime":"2025-10-08T13:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.872737 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.872792 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.872745 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:09 crc kubenswrapper[5065]: E1008 13:20:09.872839 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:09 crc kubenswrapper[5065]: E1008 13:20:09.872914 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:09 crc kubenswrapper[5065]: E1008 13:20:09.872963 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.974018 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.974080 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.974091 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.974107 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:09 crc kubenswrapper[5065]: I1008 13:20:09.974118 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:09Z","lastTransitionTime":"2025-10-08T13:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.076627 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.076663 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.076674 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.076689 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.076700 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:10Z","lastTransitionTime":"2025-10-08T13:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.178880 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.178922 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.178935 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.178951 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.178964 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:10Z","lastTransitionTime":"2025-10-08T13:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.281390 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.281466 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.281477 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.281491 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.281500 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:10Z","lastTransitionTime":"2025-10-08T13:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.383351 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.383387 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.383397 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.383432 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.383443 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:10Z","lastTransitionTime":"2025-10-08T13:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.486498 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.486539 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.486551 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.486567 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.486581 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:10Z","lastTransitionTime":"2025-10-08T13:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.588687 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.588756 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.588772 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.588788 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.588800 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:10Z","lastTransitionTime":"2025-10-08T13:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.692338 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.692387 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.692397 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.692429 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.692439 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:10Z","lastTransitionTime":"2025-10-08T13:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.795132 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.795181 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.795193 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.795212 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.795223 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:10Z","lastTransitionTime":"2025-10-08T13:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.873140 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:10 crc kubenswrapper[5065]: E1008 13:20:10.873296 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.897466 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.897501 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.897515 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.897532 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.897547 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:10Z","lastTransitionTime":"2025-10-08T13:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.999834 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.999894 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:10 crc kubenswrapper[5065]: I1008 13:20:10.999911 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:10.999930 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:10.999945 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:10Z","lastTransitionTime":"2025-10-08T13:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.102908 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.102953 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.102963 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.102981 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.102994 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:11Z","lastTransitionTime":"2025-10-08T13:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.205438 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.205485 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.205496 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.205512 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.205522 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:11Z","lastTransitionTime":"2025-10-08T13:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.308478 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.308529 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.308543 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.308565 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.308580 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:11Z","lastTransitionTime":"2025-10-08T13:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.410950 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.411043 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.411075 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.411107 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.411130 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:11Z","lastTransitionTime":"2025-10-08T13:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.513852 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.513898 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.513908 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.513922 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.513936 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:11Z","lastTransitionTime":"2025-10-08T13:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.616803 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.616832 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.616839 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.616852 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.616860 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:11Z","lastTransitionTime":"2025-10-08T13:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.719746 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.719784 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.719793 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.719808 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.719818 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:11Z","lastTransitionTime":"2025-10-08T13:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.822705 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.822748 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.822759 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.822775 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.822787 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:11Z","lastTransitionTime":"2025-10-08T13:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.873039 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.873062 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:11 crc kubenswrapper[5065]: E1008 13:20:11.873149 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.873040 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:11 crc kubenswrapper[5065]: E1008 13:20:11.873353 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:11 crc kubenswrapper[5065]: E1008 13:20:11.873559 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.925059 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.925092 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.925100 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.925115 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:11 crc kubenswrapper[5065]: I1008 13:20:11.925124 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:11Z","lastTransitionTime":"2025-10-08T13:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.027537 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.027787 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.027803 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.027822 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.027836 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:12Z","lastTransitionTime":"2025-10-08T13:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.130621 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.130677 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.130697 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.130720 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.130736 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:12Z","lastTransitionTime":"2025-10-08T13:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.236828 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.236888 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.236903 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.236924 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.236941 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:12Z","lastTransitionTime":"2025-10-08T13:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.339826 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.339877 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.339893 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.339915 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.339949 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:12Z","lastTransitionTime":"2025-10-08T13:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.389358 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkvkk_ddc2ce1c-bf76-4663-a2d6-e518ff7a4678/kube-multus/1.log" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.390141 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkvkk_ddc2ce1c-bf76-4663-a2d6-e518ff7a4678/kube-multus/0.log" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.390206 5065 generic.go:334] "Generic (PLEG): container finished" podID="ddc2ce1c-bf76-4663-a2d6-e518ff7a4678" containerID="bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b" exitCode=1 Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.390248 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkvkk" event={"ID":"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678","Type":"ContainerDied","Data":"bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b"} Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.390297 5065 scope.go:117] "RemoveContainer" containerID="72ae1bec8b1068929b811eeda601bcaf07b19e2f5959f41437effa772fb49d4c" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.391186 5065 scope.go:117] "RemoveContainer" containerID="bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b" Oct 08 13:20:12 crc kubenswrapper[5065]: E1008 13:20:12.391764 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-dkvkk_openshift-multus(ddc2ce1c-bf76-4663-a2d6-e518ff7a4678)\"" pod="openshift-multus/multus-dkvkk" podUID="ddc2ce1c-bf76-4663-a2d6-e518ff7a4678" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.441962 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.441991 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.441999 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.442012 
5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.442021 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:12Z","lastTransitionTime":"2025-10-08T13:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.545599 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.545664 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.545674 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.545695 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.545707 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:12Z","lastTransitionTime":"2025-10-08T13:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.649276 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.649341 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.649351 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.649373 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.649388 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:12Z","lastTransitionTime":"2025-10-08T13:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.753009 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.753090 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.753113 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.753144 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.753168 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:12Z","lastTransitionTime":"2025-10-08T13:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.856392 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.856505 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.856531 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.856563 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.856591 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:12Z","lastTransitionTime":"2025-10-08T13:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.872756 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:12 crc kubenswrapper[5065]: E1008 13:20:12.872899 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.958728 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.958783 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.958802 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.958826 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:12 crc kubenswrapper[5065]: I1008 13:20:12.958885 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:12Z","lastTransitionTime":"2025-10-08T13:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.062363 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.062448 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.062465 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.062487 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.062504 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:13Z","lastTransitionTime":"2025-10-08T13:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.165522 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.165558 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.165570 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.165586 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.165599 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:13Z","lastTransitionTime":"2025-10-08T13:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.267822 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.267894 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.267911 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.267937 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.267954 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:13Z","lastTransitionTime":"2025-10-08T13:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.370905 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.370996 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.371021 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.371053 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.371078 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:13Z","lastTransitionTime":"2025-10-08T13:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.396507 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkvkk_ddc2ce1c-bf76-4663-a2d6-e518ff7a4678/kube-multus/1.log" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.473464 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.473511 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.473520 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.473535 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.473543 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:13Z","lastTransitionTime":"2025-10-08T13:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.575996 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.576042 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.576054 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.576075 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.576087 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:13Z","lastTransitionTime":"2025-10-08T13:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.678851 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.678891 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.678900 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.678915 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.678926 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:13Z","lastTransitionTime":"2025-10-08T13:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.781988 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.782023 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.782035 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.782056 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.782073 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:13Z","lastTransitionTime":"2025-10-08T13:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.873460 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.873575 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.873607 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:13 crc kubenswrapper[5065]: E1008 13:20:13.873645 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:13 crc kubenswrapper[5065]: E1008 13:20:13.873730 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:13 crc kubenswrapper[5065]: E1008 13:20:13.873851 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.885183 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.885250 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.885273 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.885304 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.885326 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:13Z","lastTransitionTime":"2025-10-08T13:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.988028 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.988070 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.988082 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.988101 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:13 crc kubenswrapper[5065]: I1008 13:20:13.988114 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:13Z","lastTransitionTime":"2025-10-08T13:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.091073 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.091144 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.091158 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.091176 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.091188 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:14Z","lastTransitionTime":"2025-10-08T13:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.193913 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.193981 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.194006 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.194036 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.194055 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:14Z","lastTransitionTime":"2025-10-08T13:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.296795 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.296881 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.296909 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.296943 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.296967 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:14Z","lastTransitionTime":"2025-10-08T13:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.399548 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.399607 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.399625 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.399651 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.399674 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:14Z","lastTransitionTime":"2025-10-08T13:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.502254 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.502290 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.502300 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.502317 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.502329 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:14Z","lastTransitionTime":"2025-10-08T13:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.605479 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.605531 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.605542 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.605561 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.605574 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:14Z","lastTransitionTime":"2025-10-08T13:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.708083 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.708121 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.708130 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.708144 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.708155 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:14Z","lastTransitionTime":"2025-10-08T13:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.811214 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.811245 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.811291 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.811305 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.811313 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:14Z","lastTransitionTime":"2025-10-08T13:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.873189 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:14 crc kubenswrapper[5065]: E1008 13:20:14.873631 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.915954 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.916023 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.916049 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.916090 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:14 crc kubenswrapper[5065]: I1008 13:20:14.916112 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:14Z","lastTransitionTime":"2025-10-08T13:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.019190 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.019251 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.019268 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.019293 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.019311 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:15Z","lastTransitionTime":"2025-10-08T13:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.123771 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.123826 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.123842 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.123866 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.123884 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:15Z","lastTransitionTime":"2025-10-08T13:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.226476 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.226509 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.226519 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.226557 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.226570 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:15Z","lastTransitionTime":"2025-10-08T13:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.328842 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.328901 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.328919 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.328943 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.328960 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:15Z","lastTransitionTime":"2025-10-08T13:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.433777 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.433829 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.433846 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.433871 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.433888 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:15Z","lastTransitionTime":"2025-10-08T13:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.536479 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.536561 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.536581 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.536606 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.536627 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:15Z","lastTransitionTime":"2025-10-08T13:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.640242 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.640542 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.640637 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.640761 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.640851 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:15Z","lastTransitionTime":"2025-10-08T13:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.744508 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.744550 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.744558 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.744573 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.744585 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:15Z","lastTransitionTime":"2025-10-08T13:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.847471 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.847517 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.847527 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.847545 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.847555 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:15Z","lastTransitionTime":"2025-10-08T13:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.872848 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.872928 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:15 crc kubenswrapper[5065]: E1008 13:20:15.872975 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.872935 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:15 crc kubenswrapper[5065]: E1008 13:20:15.873211 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:15 crc kubenswrapper[5065]: E1008 13:20:15.873462 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.949624 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.949667 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.949692 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.949711 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:15 crc kubenswrapper[5065]: I1008 13:20:15.949725 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:15Z","lastTransitionTime":"2025-10-08T13:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.053163 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.053264 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.053280 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.053297 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.053309 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:16Z","lastTransitionTime":"2025-10-08T13:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.155517 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.155556 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.155566 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.155582 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.155593 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:16Z","lastTransitionTime":"2025-10-08T13:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.258454 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.258494 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.258506 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.258542 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.258551 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:16Z","lastTransitionTime":"2025-10-08T13:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.361633 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.361694 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.361710 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.361735 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.361752 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:16Z","lastTransitionTime":"2025-10-08T13:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.464750 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.464796 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.464808 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.464830 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.464845 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:16Z","lastTransitionTime":"2025-10-08T13:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.566905 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.566952 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.566965 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.566982 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.566993 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:16Z","lastTransitionTime":"2025-10-08T13:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.669526 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.669575 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.669586 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.669603 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.669614 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:16Z","lastTransitionTime":"2025-10-08T13:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.683980 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.684013 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.684022 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.684036 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.684044 5065 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T13:20:16Z","lastTransitionTime":"2025-10-08T13:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.754157 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6"] Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.754554 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.758234 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.758890 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.759256 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.760179 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.872981 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:16 crc kubenswrapper[5065]: E1008 13:20:16.873163 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.890151 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/56efc3c5-1c17-4e46-8469-e76d74448c26-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.890214 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/56efc3c5-1c17-4e46-8469-e76d74448c26-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.890242 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56efc3c5-1c17-4e46-8469-e76d74448c26-service-ca\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.890273 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56efc3c5-1c17-4e46-8469-e76d74448c26-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.890287 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56efc3c5-1c17-4e46-8469-e76d74448c26-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.994715 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/56efc3c5-1c17-4e46-8469-e76d74448c26-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.994891 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/56efc3c5-1c17-4e46-8469-e76d74448c26-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.994916 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56efc3c5-1c17-4e46-8469-e76d74448c26-service-ca\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: 
\"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.995029 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56efc3c5-1c17-4e46-8469-e76d74448c26-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.995057 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56efc3c5-1c17-4e46-8469-e76d74448c26-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.995097 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/56efc3c5-1c17-4e46-8469-e76d74448c26-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.995149 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/56efc3c5-1c17-4e46-8469-e76d74448c26-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:16 crc kubenswrapper[5065]: I1008 13:20:16.996142 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56efc3c5-1c17-4e46-8469-e76d74448c26-service-ca\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:17 crc kubenswrapper[5065]: I1008 13:20:17.001917 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56efc3c5-1c17-4e46-8469-e76d74448c26-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:17 crc kubenswrapper[5065]: I1008 13:20:17.010767 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56efc3c5-1c17-4e46-8469-e76d74448c26-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-km6j6\" (UID: \"56efc3c5-1c17-4e46-8469-e76d74448c26\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:17 crc kubenswrapper[5065]: I1008 13:20:17.067310 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" Oct 08 13:20:17 crc kubenswrapper[5065]: W1008 13:20:17.080611 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56efc3c5_1c17_4e46_8469_e76d74448c26.slice/crio-6eb8007adee7ad4f5a3bba966f95f021dc9dacffe46d650eea01b9ba9115586f WatchSource:0}: Error finding container 6eb8007adee7ad4f5a3bba966f95f021dc9dacffe46d650eea01b9ba9115586f: Status 404 returned error can't find the container with id 6eb8007adee7ad4f5a3bba966f95f021dc9dacffe46d650eea01b9ba9115586f Oct 08 13:20:17 crc kubenswrapper[5065]: I1008 13:20:17.411820 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" event={"ID":"56efc3c5-1c17-4e46-8469-e76d74448c26","Type":"ContainerStarted","Data":"59d5348644678c41c6db3e9a77e1f0c01bf15c86846179d28abae63d09db6a9d"} Oct 08 13:20:17 crc kubenswrapper[5065]: I1008 13:20:17.412561 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" event={"ID":"56efc3c5-1c17-4e46-8469-e76d74448c26","Type":"ContainerStarted","Data":"6eb8007adee7ad4f5a3bba966f95f021dc9dacffe46d650eea01b9ba9115586f"} Oct 08 13:20:17 crc kubenswrapper[5065]: I1008 13:20:17.873398 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:17 crc kubenswrapper[5065]: I1008 13:20:17.873567 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:17 crc kubenswrapper[5065]: E1008 13:20:17.873602 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:17 crc kubenswrapper[5065]: E1008 13:20:17.873761 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:17 crc kubenswrapper[5065]: I1008 13:20:17.874104 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:17 crc kubenswrapper[5065]: E1008 13:20:17.874347 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:18 crc kubenswrapper[5065]: E1008 13:20:18.822275 5065 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 08 13:20:18 crc kubenswrapper[5065]: I1008 13:20:18.872531 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:18 crc kubenswrapper[5065]: E1008 13:20:18.874677 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:18 crc kubenswrapper[5065]: E1008 13:20:18.962165 5065 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 13:20:19 crc kubenswrapper[5065]: I1008 13:20:19.873188 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:19 crc kubenswrapper[5065]: I1008 13:20:19.873188 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:19 crc kubenswrapper[5065]: E1008 13:20:19.873403 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:19 crc kubenswrapper[5065]: I1008 13:20:19.873197 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:19 crc kubenswrapper[5065]: E1008 13:20:19.873525 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:19 crc kubenswrapper[5065]: E1008 13:20:19.873326 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:20 crc kubenswrapper[5065]: I1008 13:20:20.872691 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:20 crc kubenswrapper[5065]: E1008 13:20:20.873094 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:20 crc kubenswrapper[5065]: I1008 13:20:20.873347 5065 scope.go:117] "RemoveContainer" containerID="4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f" Oct 08 13:20:21 crc kubenswrapper[5065]: I1008 13:20:21.426597 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/3.log" Oct 08 13:20:21 crc kubenswrapper[5065]: I1008 13:20:21.429729 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerStarted","Data":"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"} Oct 08 13:20:21 crc kubenswrapper[5065]: I1008 13:20:21.430163 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:20:21 crc kubenswrapper[5065]: I1008 13:20:21.568185 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-km6j6" podStartSLOduration=104.568125456 podStartE2EDuration="1m44.568125456s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:17.423369115 +0000 UTC m=+119.200750872" watchObservedRunningTime="2025-10-08 13:20:21.568125456 +0000 UTC m=+123.345507213" Oct 08 13:20:21 crc kubenswrapper[5065]: I1008 13:20:21.569043 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podStartSLOduration=104.569034941 podStartE2EDuration="1m44.569034941s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:21.562733051 +0000 UTC m=+123.340114808" watchObservedRunningTime="2025-10-08 13:20:21.569034941 +0000 UTC m=+123.346416698" Oct 08 13:20:21 crc kubenswrapper[5065]: I1008 13:20:21.873403 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:21 crc kubenswrapper[5065]: I1008 13:20:21.873556 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:21 crc kubenswrapper[5065]: E1008 13:20:21.873691 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:21 crc kubenswrapper[5065]: E1008 13:20:21.873873 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:21 crc kubenswrapper[5065]: I1008 13:20:21.874134 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:21 crc kubenswrapper[5065]: E1008 13:20:21.874281 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:21 crc kubenswrapper[5065]: I1008 13:20:21.958572 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6nwh2"] Oct 08 13:20:21 crc kubenswrapper[5065]: I1008 13:20:21.958886 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:21 crc kubenswrapper[5065]: E1008 13:20:21.959067 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:23 crc kubenswrapper[5065]: I1008 13:20:23.872842 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:23 crc kubenswrapper[5065]: I1008 13:20:23.872940 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:23 crc kubenswrapper[5065]: I1008 13:20:23.872960 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:23 crc kubenswrapper[5065]: I1008 13:20:23.872975 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:23 crc kubenswrapper[5065]: I1008 13:20:23.873332 5065 scope.go:117] "RemoveContainer" containerID="bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b" Oct 08 13:20:23 crc kubenswrapper[5065]: E1008 13:20:23.874132 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:23 crc kubenswrapper[5065]: E1008 13:20:23.874265 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:23 crc kubenswrapper[5065]: E1008 13:20:23.874300 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:23 crc kubenswrapper[5065]: E1008 13:20:23.874444 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:23 crc kubenswrapper[5065]: E1008 13:20:23.963471 5065 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 13:20:24 crc kubenswrapper[5065]: I1008 13:20:24.440830 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkvkk_ddc2ce1c-bf76-4663-a2d6-e518ff7a4678/kube-multus/1.log" Oct 08 13:20:24 crc kubenswrapper[5065]: I1008 13:20:24.440893 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkvkk" event={"ID":"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678","Type":"ContainerStarted","Data":"3fc3fa49d9469ddc9f0cf14a9709270dfe42e85b0357c77c10baa16acfeeb096"} Oct 08 13:20:25 crc kubenswrapper[5065]: I1008 13:20:25.873171 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:25 crc kubenswrapper[5065]: I1008 13:20:25.873202 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:25 crc kubenswrapper[5065]: E1008 13:20:25.874623 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:25 crc kubenswrapper[5065]: I1008 13:20:25.873259 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:25 crc kubenswrapper[5065]: I1008 13:20:25.873238 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:25 crc kubenswrapper[5065]: E1008 13:20:25.874778 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:25 crc kubenswrapper[5065]: E1008 13:20:25.874931 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:25 crc kubenswrapper[5065]: E1008 13:20:25.874999 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:27 crc kubenswrapper[5065]: I1008 13:20:27.369158 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:20:27 crc kubenswrapper[5065]: I1008 13:20:27.873476 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:27 crc kubenswrapper[5065]: I1008 13:20:27.873479 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:27 crc kubenswrapper[5065]: I1008 13:20:27.873528 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:27 crc kubenswrapper[5065]: E1008 13:20:27.874088 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6nwh2" podUID="c8a38e7c-bbc4-4255-ab4e-a056eb0655be" Oct 08 13:20:27 crc kubenswrapper[5065]: E1008 13:20:27.873887 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 13:20:27 crc kubenswrapper[5065]: I1008 13:20:27.873584 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:27 crc kubenswrapper[5065]: E1008 13:20:27.874155 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 13:20:27 crc kubenswrapper[5065]: E1008 13:20:27.874221 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 13:20:29 crc kubenswrapper[5065]: I1008 13:20:29.873063 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:29 crc kubenswrapper[5065]: I1008 13:20:29.873112 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:29 crc kubenswrapper[5065]: I1008 13:20:29.873136 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:29 crc kubenswrapper[5065]: I1008 13:20:29.873130 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:29 crc kubenswrapper[5065]: I1008 13:20:29.875364 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 08 13:20:29 crc kubenswrapper[5065]: I1008 13:20:29.875986 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 08 13:20:29 crc kubenswrapper[5065]: I1008 13:20:29.876505 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 08 13:20:29 crc kubenswrapper[5065]: I1008 13:20:29.876585 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 08 13:20:29 crc kubenswrapper[5065]: I1008 13:20:29.877937 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 08 13:20:29 crc kubenswrapper[5065]: I1008 13:20:29.877955 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.358762 5065 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.403968 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-w27qr"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.404820 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xr8vs"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.405634 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fplnt"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.406213 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.406576 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.407136 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.416243 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.417020 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8t8br"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.417701 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.417962 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.418000 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.418913 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.419136 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.419459 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.427470 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8gdt7"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.445355 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xpzqc"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.445905 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.445905 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xpzqc"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.446773 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.448276 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449218 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449317 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449217 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449218 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449438 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449487 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449510 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449561 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449575 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449681 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449693 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449710 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449808 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449842 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449920 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.449990 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.450150 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.450338 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.450532 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.450546 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.450630 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.450661 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.450688 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.450756 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.450845 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451146 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kw2gp"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451161 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451197 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451284 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451383 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451502 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451577 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451607 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zlcv8"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451792 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kw2gp"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451792 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.452207 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zlcv8"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451615 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451290 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451639 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451658 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451169 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451666 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451697 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451702 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451722 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.451737 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.453796 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.454442 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.465501 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.467651 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.467873 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.468054 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.468267 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.468343 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.468518 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.468674 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.468838 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.468931 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.468978 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.469037 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.468969 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.469185 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.469001 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.469294 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.469427 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.470384 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.470496 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.471568 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.472062 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.472304 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.473305 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.473817 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.474771 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.475317 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.477679 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.477921 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.478060 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.480411 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.497777 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.512493 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.513218 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.514571 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.515251 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.515292 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.520277 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.522453 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.523057 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.525297 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.525825 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.526221 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.526314 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.526457 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.526977 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.527136 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.528613 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.528767 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.528918 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.529029 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.529131 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.529236 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.529316 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.529345 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.529466 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.529517 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.529519 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.529579 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.529134 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.529611 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.529921 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.530058 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.530142 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.531051 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6gx5j"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.534956 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2b6kx"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.530254 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.535296 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.535316 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.530286 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.535549 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.530291 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.535598 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.530317 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.535781 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.530333 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.535857 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-lcdtm"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.535783 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.536033 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.530434 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.536120 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.535943 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.536219 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.536391 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xx98"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.536488 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.536737 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2b6kx"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.536766 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.536824 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.531141 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.537009 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-lcdtm"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.537050 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.537067 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.536740 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.537532 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.538582 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.540402 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nx54q"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.541046 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.541603 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rnnlg"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.543090 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.546544 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.547851 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-s52g7"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.548542 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.550702 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.551733 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.552303 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fplnt"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.553331 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.557901 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.558494 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.558976 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.559248 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.559273 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-8zlcn"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.559437 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.564157 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8zlcn"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.566055 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.576647 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nnvb5"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.577198 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.577393 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.578588 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xr8vs"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.579627 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kw2gp"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.581460 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zlcv8"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.583123 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-w27qr"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.584343 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.585608 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nx54q"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.586943 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-lcdtm"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.588474 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.588540 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.589456 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.590897 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8gdt7"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.592512 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.594040 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.595158 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8t8br"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.597069 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.597689 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.599108 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2rxtm"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.599716 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2rxtm"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.600586 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6gx5j"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.601980 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.604311 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.605403 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2b6kx"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.606791 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.608301 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.608960 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rnnlg"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.610071 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.611593 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xx98"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.612750 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.613967 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/07e8eaa7-dc9d-4581-a962-554de51f6137-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614010 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614035 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t7fv\" (UniqueName: \"kubernetes.io/projected/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-kube-api-access-7t7fv\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614058 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/07e8eaa7-dc9d-4581-a962-554de51f6137-audit-dir\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614162 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07e8eaa7-dc9d-4581-a962-554de51f6137-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614207 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f86b9043-eb02-42e7-b53b-2e684dd2ad26-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-87chs\" (UID: \"f86b9043-eb02-42e7-b53b-2e684dd2ad26\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614233 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-audit\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614263 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614281 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6877d346-fa92-428a-859c-218fdfe5ca4f-console-oauth-config\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614295 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-service-ca\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614310 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-etcd-serving-ca\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614328 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614353 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-config\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614373 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-audit-policies\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614426 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb6jm\" (UniqueName: \"kubernetes.io/projected/09664f6d-52dd-48af-b1ad-d19e58094ecc-kube-api-access-bb6jm\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614455 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42132cd2-ec8f-47e2-8011-6f39c454977f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xr8vs\" (UID: \"42132cd2-ec8f-47e2-8011-6f39c454977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614479 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/07e8eaa7-dc9d-4581-a962-554de51f6137-etcd-client\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614500 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614518 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614537 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/af14fd36-37c4-43d6-aabc-722f41b42da1-machine-approver-tls\") pod \"machine-approver-56656f9798-g2w6g\" (UID: \"af14fd36-37c4-43d6-aabc-722f41b42da1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614557 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzw89\" (UniqueName: \"kubernetes.io/projected/d7443ea7-16f6-449c-baea-52a1facd0967-kube-api-access-qzw89\") pod \"console-operator-58897d9998-xpzqc\" (UID: \"d7443ea7-16f6-449c-baea-52a1facd0967\") " pod="openshift-console-operator/console-operator-58897d9998-xpzqc"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614578 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7443ea7-16f6-449c-baea-52a1facd0967-trusted-ca\") pod \"console-operator-58897d9998-xpzqc\" (UID: \"d7443ea7-16f6-449c-baea-52a1facd0967\") " pod="openshift-console-operator/console-operator-58897d9998-xpzqc"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614600 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-client-ca\") pod \"route-controller-manager-6576b87f9c-r4tnh\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614621 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c300f213-f82e-4f8c-8402-9e0af05d049c-proxy-tls\") pod \"machine-config-controller-84d6567774-fx6cj\" (UID: \"c300f213-f82e-4f8c-8402-9e0af05d049c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614641 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/61abc989-efa8-41c2-ae46-1c7015e76aee-etcd-client\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614661 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614681 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bsgh\" (UniqueName: \"kubernetes.io/projected/07e8eaa7-dc9d-4581-a962-554de51f6137-kube-api-access-5bsgh\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614700 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-trusted-ca-bundle\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614718 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61abc989-efa8-41c2-ae46-1c7015e76aee-serving-cert\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614752 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxwgz\" (UniqueName: \"kubernetes.io/projected/2f5abc48-dd97-49da-8c32-b388116c092a-kube-api-access-hxwgz\") pod \"openshift-config-operator-7777fb866f-7jrh9\" (UID: \"2f5abc48-dd97-49da-8c32-b388116c092a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614774 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7443ea7-16f6-449c-baea-52a1facd0967-serving-cert\") pod \"console-operator-58897d9998-xpzqc\" (UID: \"d7443ea7-16f6-449c-baea-52a1facd0967\") " pod="openshift-console-operator/console-operator-58897d9998-xpzqc"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614796 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/61abc989-efa8-41c2-ae46-1c7015e76aee-audit-dir\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614818 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af14fd36-37c4-43d6-aabc-722f41b42da1-auth-proxy-config\") pod \"machine-approver-56656f9798-g2w6g\" (UID: \"af14fd36-37c4-43d6-aabc-722f41b42da1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614840 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af14fd36-37c4-43d6-aabc-722f41b42da1-config\") pod \"machine-approver-56656f9798-g2w6g\" (UID: \"af14fd36-37c4-43d6-aabc-722f41b42da1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614861 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-oauth-serving-cert\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614891 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62kqv\" (UniqueName: \"kubernetes.io/projected/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-kube-api-access-62kqv\") pod \"route-controller-manager-6576b87f9c-r4tnh\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614914 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42132cd2-ec8f-47e2-8011-6f39c454977f-config\") pod \"machine-api-operator-5694c8668f-xr8vs\" (UID: \"42132cd2-ec8f-47e2-8011-6f39c454977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614933 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7k5v\" (UniqueName: \"kubernetes.io/projected/c300f213-f82e-4f8c-8402-9e0af05d049c-kube-api-access-q7k5v\") pod \"machine-config-controller-84d6567774-fx6cj\" (UID: \"c300f213-f82e-4f8c-8402-9e0af05d049c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614960 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jtzw\" (UniqueName: \"kubernetes.io/projected/6877d346-fa92-428a-859c-218fdfe5ca4f-kube-api-access-5jtzw\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614979 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f5abc48-dd97-49da-8c32-b388116c092a-serving-cert\") pod \"openshift-config-operator-7777fb866f-7jrh9\" (UID: \"2f5abc48-dd97-49da-8c32-b388116c092a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.614997 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qlkc\" (UniqueName: \"kubernetes.io/projected/025e6f00-f56b-4674-9cf8-6ddb57afe15f-kube-api-access-7qlkc\") pod \"control-plane-machine-set-operator-78cbb6b69f-c8bkc\" (UID: \"025e6f00-f56b-4674-9cf8-6ddb57afe15f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615014 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8tgv\" (UniqueName: \"kubernetes.io/projected/af14fd36-37c4-43d6-aabc-722f41b42da1-kube-api-access-m8tgv\") pod \"machine-approver-56656f9798-g2w6g\" (UID: \"af14fd36-37c4-43d6-aabc-722f41b42da1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615033 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs29m\" (UniqueName: \"kubernetes.io/projected/cf77c43f-8ce4-40aa-81bf-d2d40068edcc-kube-api-access-qs29m\") pod \"migrator-59844c95c7-kw2gp\" (UID: \"cf77c43f-8ce4-40aa-81bf-d2d40068edcc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kw2gp"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615054 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615076 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/025e6f00-f56b-4674-9cf8-6ddb57afe15f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c8bkc\" (UID: \"025e6f00-f56b-4674-9cf8-6ddb57afe15f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615100 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2f5abc48-dd97-49da-8c32-b388116c092a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7jrh9\" (UID: \"2f5abc48-dd97-49da-8c32-b388116c092a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615119 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4j2m\" (UniqueName: \"kubernetes.io/projected/61abc989-efa8-41c2-ae46-1c7015e76aee-kube-api-access-w4j2m\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615143 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq76r\" (UniqueName: \"kubernetes.io/projected/f86b9043-eb02-42e7-b53b-2e684dd2ad26-kube-api-access-nq76r\") pod \"cluster-samples-operator-665b6dd947-87chs\" (UID: \"f86b9043-eb02-42e7-b53b-2e684dd2ad26\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615162 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7443ea7-16f6-449c-baea-52a1facd0967-config\") pod \"console-operator-58897d9998-xpzqc\" (UID: \"d7443ea7-16f6-449c-baea-52a1facd0967\") " pod="openshift-console-operator/console-operator-58897d9998-xpzqc"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615184 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615202 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-config\") pod \"route-controller-manager-6576b87f9c-r4tnh\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615222 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/61abc989-efa8-41c2-ae46-1c7015e76aee-encryption-config\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615258 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562d8067-863a-4644-9fd6-f51281a2191b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5wd6t\" (UID: \"562d8067-863a-4644-9fd6-f51281a2191b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615282 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-client-ca\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615305 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615338 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615359 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/07e8eaa7-dc9d-4581-a962-554de51f6137-audit-policies\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615380 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61abc989-efa8-41c2-ae46-1c7015e76aee-node-pullsecrets\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615401 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-config\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615452 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07e8eaa7-dc9d-4581-a962-554de51f6137-serving-cert\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615490 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42132cd2-ec8f-47e2-8011-6f39c454977f-images\") pod \"machine-api-operator-5694c8668f-xr8vs\" (UID: \"42132cd2-ec8f-47e2-8011-6f39c454977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615514 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562d8067-863a-4644-9fd6-f51281a2191b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5wd6t\" (UID: \"562d8067-863a-4644-9fd6-f51281a2191b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615560 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/07e8eaa7-dc9d-4581-a962-554de51f6137-encryption-config\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615589 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-audit-dir\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615613 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615633 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efd7ad79-f03d-486a-88d8-8be245697463-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zlcv8\" (UID: \"efd7ad79-f03d-486a-88d8-8be245697463\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zlcv8"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615653 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hsjc\" (UniqueName: \"kubernetes.io/projected/562d8067-863a-4644-9fd6-f51281a2191b-kube-api-access-8hsjc\") pod \"kube-storage-version-migrator-operator-b67b599dd-5wd6t\" (UID: \"562d8067-863a-4644-9fd6-f51281a2191b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615661 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-s52g7"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615674 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42rkq\" (UniqueName: \"kubernetes.io/projected/efd7ad79-f03d-486a-88d8-8be245697463-kube-api-access-42rkq\") pod \"multus-admission-controller-857f4d67dd-zlcv8\" (UID: \"efd7ad79-f03d-486a-88d8-8be245697463\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zlcv8"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615694 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88"]
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615696 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd8v8\" (UniqueName: \"kubernetes.io/projected/42132cd2-ec8f-47e2-8011-6f39c454977f-kube-api-access-zd8v8\") pod \"machine-api-operator-5694c8668f-xr8vs\" (UID: \"42132cd2-ec8f-47e2-8011-6f39c454977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615728 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09664f6d-52dd-48af-b1ad-d19e58094ecc-serving-cert\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615748 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615769 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-console-config\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615851 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615875 5065
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-serving-cert\") pod \"route-controller-manager-6576b87f9c-r4tnh\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615955 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6877d346-fa92-428a-859c-218fdfe5ca4f-console-serving-cert\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.615984 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-image-import-ca\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.616016 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c300f213-f82e-4f8c-8402-9e0af05d049c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fx6cj\" (UID: \"c300f213-f82e-4f8c-8402-9e0af05d049c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.617544 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.618740 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nnvb5"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.620863 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.622434 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xpzqc"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.623834 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.629696 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.630117 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-h8cnh"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.630711 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h8cnh" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.633662 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h8cnh"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.636873 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.637819 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.638885 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.640043 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6sglj"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.641035 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.648828 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-g75m7"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.649682 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.649892 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-g75m7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.650050 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-g75m7"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.651117 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6sglj"] Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.672650 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.689936 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.708908 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.716523 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb6jm\" (UniqueName: \"kubernetes.io/projected/09664f6d-52dd-48af-b1ad-d19e58094ecc-kube-api-access-bb6jm\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.716560 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42132cd2-ec8f-47e2-8011-6f39c454977f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xr8vs\" (UID: \"42132cd2-ec8f-47e2-8011-6f39c454977f\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.716593 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2648eeb-e556-43bd-a3de-ace83e540571-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gzl88\" (UID: \"c2648eeb-e556-43bd-a3de-ace83e540571\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.716617 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.716641 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.716735 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzw89\" (UniqueName: \"kubernetes.io/projected/d7443ea7-16f6-449c-baea-52a1facd0967-kube-api-access-qzw89\") pod \"console-operator-58897d9998-xpzqc\" (UID: \"d7443ea7-16f6-449c-baea-52a1facd0967\") " pod="openshift-console-operator/console-operator-58897d9998-xpzqc" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.716809 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86607750-37d4-45d3-bc51-8633912e77fd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-j97lr\" (UID: \"86607750-37d4-45d3-bc51-8633912e77fd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.716840 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-client-ca\") pod \"route-controller-manager-6576b87f9c-r4tnh\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.716872 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c300f213-f82e-4f8c-8402-9e0af05d049c-proxy-tls\") pod \"machine-config-controller-84d6567774-fx6cj\" (UID: \"c300f213-f82e-4f8c-8402-9e0af05d049c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.716932 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de258063-13a0-4a3d-93f4-b39fd81902cb-config\") pod \"kube-controller-manager-operator-78b949d7b-tzwhd\" (UID: \"de258063-13a0-4a3d-93f4-b39fd81902cb\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.716956 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b92e0c1d-2733-4e94-9bf2-667b6074ebe0-config\") pod \"service-ca-operator-777779d784-s52g7\" (UID: \"b92e0c1d-2733-4e94-9bf2-667b6074ebe0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717013 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ea1c820-2ae9-4b81-874b-3620ffa07f72-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717071 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h62kr\" (UniqueName: \"kubernetes.io/projected/28c60830-7cae-45ed-bbe5-edbb83a24e87-kube-api-access-h62kr\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717102 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7443ea7-16f6-449c-baea-52a1facd0967-serving-cert\") pod \"console-operator-58897d9998-xpzqc\" (UID: \"d7443ea7-16f6-449c-baea-52a1facd0967\") " pod="openshift-console-operator/console-operator-58897d9998-xpzqc" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717130 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4278c38-600b-497f-927d-04791c551470-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cbgft\" (UID: \"b4278c38-600b-497f-927d-04791c551470\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717153 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b719c48b-49ca-4947-8e2f-77523c4360ac-config-volume\") pod \"collect-profiles-29332155-547bm\" (UID: \"b719c48b-49ca-4947-8e2f-77523c4360ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717180 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af14fd36-37c4-43d6-aabc-722f41b42da1-auth-proxy-config\") pod \"machine-approver-56656f9798-g2w6g\" (UID: \"af14fd36-37c4-43d6-aabc-722f41b42da1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717200 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af14fd36-37c4-43d6-aabc-722f41b42da1-config\") pod \"machine-approver-56656f9798-g2w6g\" (UID: \"af14fd36-37c4-43d6-aabc-722f41b42da1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" 
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717220 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-config\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717245 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7k5v\" (UniqueName: \"kubernetes.io/projected/c300f213-f82e-4f8c-8402-9e0af05d049c-kube-api-access-q7k5v\") pod \"machine-config-controller-84d6567774-fx6cj\" (UID: \"c300f213-f82e-4f8c-8402-9e0af05d049c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717269 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de258063-13a0-4a3d-93f4-b39fd81902cb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tzwhd\" (UID: \"de258063-13a0-4a3d-93f4-b39fd81902cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717293 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/36606460-fa2c-4254-acd5-9de143291cca-tmpfs\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717317 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f5abc48-dd97-49da-8c32-b388116c092a-serving-cert\") pod \"openshift-config-operator-7777fb866f-7jrh9\" (UID: \"2f5abc48-dd97-49da-8c32-b388116c092a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717342 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtcj5\" (UniqueName: \"kubernetes.io/projected/6110169d-e524-4d26-a6a5-514ee5384554-kube-api-access-jtcj5\") pod \"package-server-manager-789f6589d5-94t24\" (UID: \"6110169d-e524-4d26-a6a5-514ee5384554\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717368 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8tgv\" (UniqueName: \"kubernetes.io/projected/af14fd36-37c4-43d6-aabc-722f41b42da1-kube-api-access-m8tgv\") pod \"machine-approver-56656f9798-g2w6g\" (UID: \"af14fd36-37c4-43d6-aabc-722f41b42da1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717392 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86607750-37d4-45d3-bc51-8633912e77fd-config\") pod \"kube-apiserver-operator-766d6c64bb-j97lr\" (UID: \"86607750-37d4-45d3-bc51-8633912e77fd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr" Oct 08 
13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717437 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqts\" (UniqueName: \"kubernetes.io/projected/939719c1-bfcc-469b-a627-627761c67f47-kube-api-access-6jqts\") pod \"olm-operator-6b444d44fb-wkr4f\" (UID: \"939719c1-bfcc-469b-a627-627761c67f47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717440 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717465 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2f5abc48-dd97-49da-8c32-b388116c092a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7jrh9\" (UID: \"2f5abc48-dd97-49da-8c32-b388116c092a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717493 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf7l2\" (UniqueName: \"kubernetes.io/projected/b719c48b-49ca-4947-8e2f-77523c4360ac-kube-api-access-gf7l2\") pod \"collect-profiles-29332155-547bm\" (UID: \"b719c48b-49ca-4947-8e2f-77523c4360ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717539 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de258063-13a0-4a3d-93f4-b39fd81902cb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tzwhd\" (UID: \"de258063-13a0-4a3d-93f4-b39fd81902cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717558 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86607750-37d4-45d3-bc51-8633912e77fd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-j97lr\" (UID: \"86607750-37d4-45d3-bc51-8633912e77fd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717574 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-serving-cert\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717597 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea1c820-2ae9-4b81-874b-3620ffa07f72-config\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717613 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea1c820-2ae9-4b81-874b-3620ffa07f72-serving-cert\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717638 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-config\") pod \"route-controller-manager-6576b87f9c-r4tnh\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717655 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/61abc989-efa8-41c2-ae46-1c7015e76aee-encryption-config\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717682 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717699 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfcc28cc-4e1d-47a3-89f7-f65d719e320a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd562\" (UID: \"dfcc28cc-4e1d-47a3-89f7-f65d719e320a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717713 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfcc28cc-4e1d-47a3-89f7-f65d719e320a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd562\" (UID: \"dfcc28cc-4e1d-47a3-89f7-f65d719e320a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717733 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717750 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-config\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc 
kubenswrapper[5065]: I1008 13:20:37.717764 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-client-ca\") pod \"route-controller-manager-6576b87f9c-r4tnh\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717769 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxxq9\" (UniqueName: \"kubernetes.io/projected/c2648eeb-e556-43bd-a3de-ace83e540571-kube-api-access-nxxq9\") pod \"cluster-image-registry-operator-dc59b4c8b-gzl88\" (UID: \"c2648eeb-e556-43bd-a3de-ace83e540571\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717814 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfkr5\" (UniqueName: \"kubernetes.io/projected/b92e0c1d-2733-4e94-9bf2-667b6074ebe0-kube-api-access-gfkr5\") pod \"service-ca-operator-777779d784-s52g7\" (UID: \"b92e0c1d-2733-4e94-9bf2-667b6074ebe0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717842 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42132cd2-ec8f-47e2-8011-6f39c454977f-images\") pod \"machine-api-operator-5694c8668f-xr8vs\" (UID: \"42132cd2-ec8f-47e2-8011-6f39c454977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717864 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562d8067-863a-4644-9fd6-f51281a2191b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5wd6t\" (UID: \"562d8067-863a-4644-9fd6-f51281a2191b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717886 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/07e8eaa7-dc9d-4581-a962-554de51f6137-encryption-config\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717911 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42rkq\" (UniqueName: \"kubernetes.io/projected/efd7ad79-f03d-486a-88d8-8be245697463-kube-api-access-42rkq\") pod \"multus-admission-controller-857f4d67dd-zlcv8\" (UID: \"efd7ad79-f03d-486a-88d8-8be245697463\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zlcv8" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717937 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hsjc\" (UniqueName: \"kubernetes.io/projected/562d8067-863a-4644-9fd6-f51281a2191b-kube-api-access-8hsjc\") pod \"kube-storage-version-migrator-operator-b67b599dd-5wd6t\" (UID: \"562d8067-863a-4644-9fd6-f51281a2191b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t" Oct 08 
13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717971 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b719c48b-49ca-4947-8e2f-77523c4360ac-secret-volume\") pod \"collect-profiles-29332155-547bm\" (UID: \"b719c48b-49ca-4947-8e2f-77523c4360ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.717995 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6ded5789-2ec3-42f6-8a56-b575b8fa7dfd-images\") pod \"machine-config-operator-74547568cd-tkqjf\" (UID: \"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718018 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd8v8\" (UniqueName: \"kubernetes.io/projected/42132cd2-ec8f-47e2-8011-6f39c454977f-kube-api-access-zd8v8\") pod \"machine-api-operator-5694c8668f-xr8vs\" (UID: \"42132cd2-ec8f-47e2-8011-6f39c454977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718039 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-etcd-service-ca\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718061 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09664f6d-52dd-48af-b1ad-d19e58094ecc-serving-cert\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718086 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2648eeb-e556-43bd-a3de-ace83e540571-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gzl88\" (UID: \"c2648eeb-e556-43bd-a3de-ace83e540571\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718109 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-image-import-ca\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718130 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bba0864b-5c2f-42d3-bb43-caaaa1dc4267-profile-collector-cert\") pod \"catalog-operator-68c6474976-bbtng\" (UID: \"bba0864b-5c2f-42d3-bb43-caaaa1dc4267\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718153 5065 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ded5789-2ec3-42f6-8a56-b575b8fa7dfd-proxy-tls\") pod \"machine-config-operator-74547568cd-tkqjf\" (UID: \"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718173 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/feb22448-6135-462d-91a3-66851678143d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2xx98\" (UID: \"feb22448-6135-462d-91a3-66851678143d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718208 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f86b9043-eb02-42e7-b53b-2e684dd2ad26-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-87chs\" (UID: \"f86b9043-eb02-42e7-b53b-2e684dd2ad26\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718231 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-audit\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718259 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkzqf\" (UniqueName: \"kubernetes.io/projected/36606460-fa2c-4254-acd5-9de143291cca-kube-api-access-nkzqf\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718284 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/939719c1-bfcc-469b-a627-627761c67f47-srv-cert\") pod \"olm-operator-6b444d44fb-wkr4f\" (UID: \"939719c1-bfcc-469b-a627-627761c67f47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718316 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718352 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-etcd-serving-ca\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718375 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-stats-auth\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718398 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/feb22448-6135-462d-91a3-66851678143d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2xx98\" (UID: \"feb22448-6135-462d-91a3-66851678143d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718442 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7e38b84d-2101-41b3-b75a-45d06288470e-certs\") pod \"machine-config-server-2rxtm\" (UID: \"7e38b84d-2101-41b3-b75a-45d06288470e\") " pod="openshift-machine-config-operator/machine-config-server-2rxtm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718466 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc56d\" (UniqueName: \"kubernetes.io/projected/7e38b84d-2101-41b3-b75a-45d06288470e-kube-api-access-lc56d\") pod \"machine-config-server-2rxtm\" (UID: \"7e38b84d-2101-41b3-b75a-45d06288470e\") " pod="openshift-machine-config-operator/machine-config-server-2rxtm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718489 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-etcd-ca\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718512 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgt2h\" (UniqueName: \"kubernetes.io/projected/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-kube-api-access-hgt2h\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718517 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af14fd36-37c4-43d6-aabc-722f41b42da1-config\") pod \"machine-approver-56656f9798-g2w6g\" (UID: \"af14fd36-37c4-43d6-aabc-722f41b42da1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718537 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/07e8eaa7-dc9d-4581-a962-554de51f6137-etcd-client\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718563 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/af14fd36-37c4-43d6-aabc-722f41b42da1-machine-approver-tls\") pod \"machine-approver-56656f9798-g2w6g\" (UID: \"af14fd36-37c4-43d6-aabc-722f41b42da1\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718585 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7443ea7-16f6-449c-baea-52a1facd0967-trusted-ca\") pod \"console-operator-58897d9998-xpzqc\" (UID: \"d7443ea7-16f6-449c-baea-52a1facd0967\") " pod="openshift-console-operator/console-operator-58897d9998-xpzqc" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718607 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2648eeb-e556-43bd-a3de-ace83e540571-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gzl88\" (UID: \"c2648eeb-e556-43bd-a3de-ace83e540571\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718631 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/61abc989-efa8-41c2-ae46-1c7015e76aee-etcd-client\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718653 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718678 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prwzt\" (UniqueName: \"kubernetes.io/projected/b4278c38-600b-497f-927d-04791c551470-kube-api-access-prwzt\") pod \"openshift-apiserver-operator-796bbdcf4f-cbgft\" (UID: \"b4278c38-600b-497f-927d-04791c551470\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718702 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ea1c820-2ae9-4b81-874b-3620ffa07f72-service-ca-bundle\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718725 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfcc28cc-4e1d-47a3-89f7-f65d719e320a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd562\" (UID: \"dfcc28cc-4e1d-47a3-89f7-f65d719e320a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718745 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqf6x\" (UniqueName: \"kubernetes.io/projected/feb22448-6135-462d-91a3-66851678143d-kube-api-access-tqf6x\") pod \"marketplace-operator-79b997595-2xx98\" (UID: \"feb22448-6135-462d-91a3-66851678143d\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718763 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bsgh\" (UniqueName: \"kubernetes.io/projected/07e8eaa7-dc9d-4581-a962-554de51f6137-kube-api-access-5bsgh\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718789 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-trusted-ca-bundle\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718809 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61abc989-efa8-41c2-ae46-1c7015e76aee-serving-cert\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718830 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxwgz\" (UniqueName: \"kubernetes.io/projected/2f5abc48-dd97-49da-8c32-b388116c092a-kube-api-access-hxwgz\") pod \"openshift-config-operator-7777fb866f-7jrh9\" (UID: \"2f5abc48-dd97-49da-8c32-b388116c092a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718850 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/61abc989-efa8-41c2-ae46-1c7015e76aee-audit-dir\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718873 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5tz7\" (UniqueName: \"kubernetes.io/projected/e227258a-5822-472d-a151-aa7c07951330-kube-api-access-z5tz7\") pod \"ingress-operator-5b745b69d9-rk2bl\" (UID: \"e227258a-5822-472d-a151-aa7c07951330\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718899 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-default-certificate\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718924 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-oauth-serving-cert\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718946 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/e227258a-5822-472d-a151-aa7c07951330-trusted-ca\") pod \"ingress-operator-5b745b69d9-rk2bl\" (UID: \"e227258a-5822-472d-a151-aa7c07951330\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718969 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b17ef7c4-a962-4759-9441-33d28b384b4e-metrics-tls\") pod \"dns-operator-744455d44c-2b6kx\" (UID: \"b17ef7c4-a962-4759-9441-33d28b384b4e\") " pod="openshift-dns-operator/dns-operator-744455d44c-2b6kx" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719002 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62kqv\" (UniqueName: \"kubernetes.io/projected/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-kube-api-access-62kqv\") pod \"route-controller-manager-6576b87f9c-r4tnh\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719023 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42132cd2-ec8f-47e2-8011-6f39c454977f-config\") pod \"machine-api-operator-5694c8668f-xr8vs\" (UID: \"42132cd2-ec8f-47e2-8011-6f39c454977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719045 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d05b459f-b9a4-425d-936a-60ee9dc5b5f0-signing-key\") pod \"service-ca-9c57cc56f-rnnlg\" (UID: \"d05b459f-b9a4-425d-936a-60ee9dc5b5f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719068 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jtzw\" (UniqueName: \"kubernetes.io/projected/6877d346-fa92-428a-859c-218fdfe5ca4f-kube-api-access-5jtzw\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719089 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/939719c1-bfcc-469b-a627-627761c67f47-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wkr4f\" (UID: \"939719c1-bfcc-469b-a627-627761c67f47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719111 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2b92\" (UniqueName: \"kubernetes.io/projected/6ded5789-2ec3-42f6-8a56-b575b8fa7dfd-kube-api-access-t2b92\") pod \"machine-config-operator-74547568cd-tkqjf\" (UID: \"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719133 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/696bcbce-29a9-4686-9ac0-e5af4558fc82-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-jhhxk\" (UID: \"696bcbce-29a9-4686-9ac0-e5af4558fc82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719162 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qlkc\" (UniqueName: \"kubernetes.io/projected/025e6f00-f56b-4674-9cf8-6ddb57afe15f-kube-api-access-7qlkc\") pod \"control-plane-machine-set-operator-78cbb6b69f-c8bkc\" (UID: \"025e6f00-f56b-4674-9cf8-6ddb57afe15f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719180 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs29m\" (UniqueName: \"kubernetes.io/projected/cf77c43f-8ce4-40aa-81bf-d2d40068edcc-kube-api-access-qs29m\") pod \"migrator-59844c95c7-kw2gp\" (UID: \"cf77c43f-8ce4-40aa-81bf-d2d40068edcc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kw2gp" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719200 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwgjp\" (UniqueName: \"kubernetes.io/projected/b923335b-a2b2-4919-909d-70a6d141c798-kube-api-access-xwgjp\") pod \"downloads-7954f5f757-lcdtm\" (UID: \"b923335b-a2b2-4919-909d-70a6d141c798\") " pod="openshift-console/downloads-7954f5f757-lcdtm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719224 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719248 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/025e6f00-f56b-4674-9cf8-6ddb57afe15f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c8bkc\" (UID: \"025e6f00-f56b-4674-9cf8-6ddb57afe15f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719274 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4j2m\" (UniqueName: \"kubernetes.io/projected/61abc989-efa8-41c2-ae46-1c7015e76aee-kube-api-access-w4j2m\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719293 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq76r\" (UniqueName: \"kubernetes.io/projected/f86b9043-eb02-42e7-b53b-2e684dd2ad26-kube-api-access-nq76r\") pod \"cluster-samples-operator-665b6dd947-87chs\" (UID: \"f86b9043-eb02-42e7-b53b-2e684dd2ad26\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719314 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-etcd-client\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719334 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7e38b84d-2101-41b3-b75a-45d06288470e-node-bootstrap-token\") pod \"machine-config-server-2rxtm\" (UID: \"7e38b84d-2101-41b3-b75a-45d06288470e\") " pod="openshift-machine-config-operator/machine-config-server-2rxtm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719355 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7443ea7-16f6-449c-baea-52a1facd0967-config\") pod \"console-operator-58897d9998-xpzqc\" (UID: \"d7443ea7-16f6-449c-baea-52a1facd0967\") " pod="openshift-console-operator/console-operator-58897d9998-xpzqc" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719376 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e227258a-5822-472d-a151-aa7c07951330-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rk2bl\" (UID: \"e227258a-5822-472d-a151-aa7c07951330\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719396 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-metrics-certs\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719438 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6110169d-e524-4d26-a6a5-514ee5384554-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-94t24\" (UID: \"6110169d-e524-4d26-a6a5-514ee5384554\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719463 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719485 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562d8067-863a-4644-9fd6-f51281a2191b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5wd6t\" (UID: \"562d8067-863a-4644-9fd6-f51281a2191b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719508 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b92e0c1d-2733-4e94-9bf2-667b6074ebe0-serving-cert\") pod \"service-ca-operator-777779d784-s52g7\" (UID: \"b92e0c1d-2733-4e94-9bf2-667b6074ebe0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719529 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ded5789-2ec3-42f6-8a56-b575b8fa7dfd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tkqjf\" (UID: \"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719551 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-client-ca\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719562 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-config\") pod \"route-controller-manager-6576b87f9c-r4tnh\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719571 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bba0864b-5c2f-42d3-bb43-caaaa1dc4267-srv-cert\") pod \"catalog-operator-68c6474976-bbtng\" (UID: \"bba0864b-5c2f-42d3-bb43-caaaa1dc4267\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719604 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/07e8eaa7-dc9d-4581-a962-554de51f6137-audit-policies\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719667 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61abc989-efa8-41c2-ae46-1c7015e76aee-node-pullsecrets\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719693 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/36606460-fa2c-4254-acd5-9de143291cca-apiservice-cert\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719745 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07e8eaa7-dc9d-4581-a962-554de51f6137-serving-cert\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719771 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72mnn\" (UniqueName: \"kubernetes.io/projected/bba0864b-5c2f-42d3-bb43-caaaa1dc4267-kube-api-access-72mnn\") pod \"catalog-operator-68c6474976-bbtng\" (UID: \"bba0864b-5c2f-42d3-bb43-caaaa1dc4267\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719819 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d05b459f-b9a4-425d-936a-60ee9dc5b5f0-signing-cabundle\") pod \"service-ca-9c57cc56f-rnnlg\" (UID: \"d05b459f-b9a4-425d-936a-60ee9dc5b5f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719844 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzb4\" (UniqueName: \"kubernetes.io/projected/b17ef7c4-a962-4759-9441-33d28b384b4e-kube-api-access-ttzb4\") pod \"dns-operator-744455d44c-2b6kx\" (UID: \"b17ef7c4-a962-4759-9441-33d28b384b4e\") " pod="openshift-dns-operator/dns-operator-744455d44c-2b6kx" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719866 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36606460-fa2c-4254-acd5-9de143291cca-webhook-cert\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.719916 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-audit-dir\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.721399 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-trusted-ca-bundle\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.721431 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.721454 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.721574 5065 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42132cd2-ec8f-47e2-8011-6f39c454977f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xr8vs\" (UID: \"42132cd2-ec8f-47e2-8011-6f39c454977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.721815 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.721851 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efd7ad79-f03d-486a-88d8-8be245697463-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zlcv8\" (UID: \"efd7ad79-f03d-486a-88d8-8be245697463\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zlcv8" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.721881 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.721892 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7443ea7-16f6-449c-baea-52a1facd0967-serving-cert\") pod \"console-operator-58897d9998-xpzqc\" (UID: \"d7443ea7-16f6-449c-baea-52a1facd0967\") " pod="openshift-console-operator/console-operator-58897d9998-xpzqc" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.721903 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-console-config\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.721982 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e227258a-5822-472d-a151-aa7c07951330-metrics-tls\") pod \"ingress-operator-5b745b69d9-rk2bl\" (UID: \"e227258a-5822-472d-a151-aa7c07951330\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722015 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722039 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-serving-cert\") pod \"route-controller-manager-6576b87f9c-r4tnh\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722066 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6877d346-fa92-428a-859c-218fdfe5ca4f-console-serving-cert\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722097 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c300f213-f82e-4f8c-8402-9e0af05d049c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fx6cj\" (UID: \"c300f213-f82e-4f8c-8402-9e0af05d049c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722122 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696bcbce-29a9-4686-9ac0-e5af4558fc82-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jhhxk\" (UID: \"696bcbce-29a9-4686-9ac0-e5af4558fc82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722150 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/07e8eaa7-dc9d-4581-a962-554de51f6137-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722182 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722208 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t7fv\" (UniqueName: \"kubernetes.io/projected/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-kube-api-access-7t7fv\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722235 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbdvz\" (UniqueName: \"kubernetes.io/projected/2ea1c820-2ae9-4b81-874b-3620ffa07f72-kube-api-access-kbdvz\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722260 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhd9c\" (UniqueName: 
\"kubernetes.io/projected/d05b459f-b9a4-425d-936a-60ee9dc5b5f0-kube-api-access-qhd9c\") pod \"service-ca-9c57cc56f-rnnlg\" (UID: \"d05b459f-b9a4-425d-936a-60ee9dc5b5f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722286 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/07e8eaa7-dc9d-4581-a962-554de51f6137-audit-dir\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722303 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-config\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722317 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl4hw\" (UniqueName: \"kubernetes.io/projected/696bcbce-29a9-4686-9ac0-e5af4558fc82-kube-api-access-sl4hw\") pod \"openshift-controller-manager-operator-756b6f6bc6-jhhxk\" (UID: \"696bcbce-29a9-4686-9ac0-e5af4558fc82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722347 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07e8eaa7-dc9d-4581-a962-554de51f6137-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722369 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4278c38-600b-497f-927d-04791c551470-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cbgft\" (UID: \"b4278c38-600b-497f-927d-04791c551470\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722392 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6877d346-fa92-428a-859c-218fdfe5ca4f-console-oauth-config\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722428 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-service-ca\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722454 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28c60830-7cae-45ed-bbe5-edbb83a24e87-service-ca-bundle\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" 
Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722482 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722501 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-config\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.722518 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-audit-policies\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.723293 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2f5abc48-dd97-49da-8c32-b388116c092a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7jrh9\" (UID: \"2f5abc48-dd97-49da-8c32-b388116c092a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.723462 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-console-config\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.723478 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.723577 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f5abc48-dd97-49da-8c32-b388116c092a-serving-cert\") pod \"openshift-config-operator-7777fb866f-7jrh9\" (UID: \"2f5abc48-dd97-49da-8c32-b388116c092a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.723984 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/61abc989-efa8-41c2-ae46-1c7015e76aee-audit-dir\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.723986 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42132cd2-ec8f-47e2-8011-6f39c454977f-images\") pod 
\"machine-api-operator-5694c8668f-xr8vs\" (UID: \"42132cd2-ec8f-47e2-8011-6f39c454977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.724020 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c300f213-f82e-4f8c-8402-9e0af05d049c-proxy-tls\") pod \"machine-config-controller-84d6567774-fx6cj\" (UID: \"c300f213-f82e-4f8c-8402-9e0af05d049c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.724064 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-audit-policies\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.718231 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af14fd36-37c4-43d6-aabc-722f41b42da1-auth-proxy-config\") pod \"machine-approver-56656f9798-g2w6g\" (UID: \"af14fd36-37c4-43d6-aabc-722f41b42da1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.724484 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-image-import-ca\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.724551 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7443ea7-16f6-449c-baea-52a1facd0967-config\") pod \"console-operator-58897d9998-xpzqc\" (UID: \"d7443ea7-16f6-449c-baea-52a1facd0967\") " pod="openshift-console-operator/console-operator-58897d9998-xpzqc" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.724944 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7443ea7-16f6-449c-baea-52a1facd0967-trusted-ca\") pod \"console-operator-58897d9998-xpzqc\" (UID: \"d7443ea7-16f6-449c-baea-52a1facd0967\") " pod="openshift-console-operator/console-operator-58897d9998-xpzqc" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.724979 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-oauth-serving-cert\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.725064 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.725102 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/562d8067-863a-4644-9fd6-f51281a2191b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5wd6t\" (UID: \"562d8067-863a-4644-9fd6-f51281a2191b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.725190 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61abc989-efa8-41c2-ae46-1c7015e76aee-node-pullsecrets\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.725275 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42132cd2-ec8f-47e2-8011-6f39c454977f-config\") pod \"machine-api-operator-5694c8668f-xr8vs\" (UID: \"42132cd2-ec8f-47e2-8011-6f39c454977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.725858 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-config\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.725921 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-client-ca\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.726408 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09664f6d-52dd-48af-b1ad-d19e58094ecc-serving-cert\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.726499 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-audit\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.726513 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c300f213-f82e-4f8c-8402-9e0af05d049c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fx6cj\" (UID: \"c300f213-f82e-4f8c-8402-9e0af05d049c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.726556 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 
13:20:37.727004 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/61abc989-efa8-41c2-ae46-1c7015e76aee-etcd-serving-ca\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.727215 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-service-ca\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.727256 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07e8eaa7-dc9d-4581-a962-554de51f6137-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.727379 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61abc989-efa8-41c2-ae46-1c7015e76aee-serving-cert\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.727403 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/07e8eaa7-dc9d-4581-a962-554de51f6137-audit-policies\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.727627 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/07e8eaa7-dc9d-4581-a962-554de51f6137-encryption-config\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.727778 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/af14fd36-37c4-43d6-aabc-722f41b42da1-machine-approver-tls\") pod \"machine-approver-56656f9798-g2w6g\" (UID: \"af14fd36-37c4-43d6-aabc-722f41b42da1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.727830 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/07e8eaa7-dc9d-4581-a962-554de51f6137-audit-dir\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.727955 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/07e8eaa7-dc9d-4581-a962-554de51f6137-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.728194 5065 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.728261 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-audit-dir\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.728948 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/025e6f00-f56b-4674-9cf8-6ddb57afe15f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c8bkc\" (UID: \"025e6f00-f56b-4674-9cf8-6ddb57afe15f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.729166 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.729388 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/07e8eaa7-dc9d-4581-a962-554de51f6137-etcd-client\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.729491 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.730088 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.730219 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/61abc989-efa8-41c2-ae46-1c7015e76aee-encryption-config\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.730576 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07e8eaa7-dc9d-4581-a962-554de51f6137-serving-cert\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.730722 5065 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562d8067-863a-4644-9fd6-f51281a2191b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5wd6t\" (UID: \"562d8067-863a-4644-9fd6-f51281a2191b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.730930 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.731087 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.731687 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6877d346-fa92-428a-859c-218fdfe5ca4f-console-serving-cert\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.732079 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efd7ad79-f03d-486a-88d8-8be245697463-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zlcv8\" (UID: \"efd7ad79-f03d-486a-88d8-8be245697463\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zlcv8" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.732239 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.732319 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6877d346-fa92-428a-859c-218fdfe5ca4f-console-oauth-config\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.732394 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/61abc989-efa8-41c2-ae46-1c7015e76aee-etcd-client\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.732957 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f86b9043-eb02-42e7-b53b-2e684dd2ad26-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-87chs\" (UID: \"f86b9043-eb02-42e7-b53b-2e684dd2ad26\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.733254 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.734659 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-serving-cert\") pod \"route-controller-manager-6576b87f9c-r4tnh\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.748289 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.768315 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.794214 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.808763 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.824242 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkzqf\" (UniqueName: \"kubernetes.io/projected/36606460-fa2c-4254-acd5-9de143291cca-kube-api-access-nkzqf\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.824292 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/939719c1-bfcc-469b-a627-627761c67f47-srv-cert\") pod \"olm-operator-6b444d44fb-wkr4f\" (UID: \"939719c1-bfcc-469b-a627-627761c67f47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.824335 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-stats-auth\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.824579 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/feb22448-6135-462d-91a3-66851678143d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2xx98\" (UID: \"feb22448-6135-462d-91a3-66851678143d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.824606 5065 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7e38b84d-2101-41b3-b75a-45d06288470e-certs\") pod \"machine-config-server-2rxtm\" (UID: \"7e38b84d-2101-41b3-b75a-45d06288470e\") " pod="openshift-machine-config-operator/machine-config-server-2rxtm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.824632 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc56d\" (UniqueName: \"kubernetes.io/projected/7e38b84d-2101-41b3-b75a-45d06288470e-kube-api-access-lc56d\") pod \"machine-config-server-2rxtm\" (UID: \"7e38b84d-2101-41b3-b75a-45d06288470e\") " pod="openshift-machine-config-operator/machine-config-server-2rxtm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.824794 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-etcd-ca\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.824888 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgt2h\" (UniqueName: \"kubernetes.io/projected/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-kube-api-access-hgt2h\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.824918 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2648eeb-e556-43bd-a3de-ace83e540571-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gzl88\" (UID: \"c2648eeb-e556-43bd-a3de-ace83e540571\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.825024 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prwzt\" (UniqueName: \"kubernetes.io/projected/b4278c38-600b-497f-927d-04791c551470-kube-api-access-prwzt\") pod \"openshift-apiserver-operator-796bbdcf4f-cbgft\" (UID: \"b4278c38-600b-497f-927d-04791c551470\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.826377 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2648eeb-e556-43bd-a3de-ace83e540571-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gzl88\" (UID: \"c2648eeb-e556-43bd-a3de-ace83e540571\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.826444 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ea1c820-2ae9-4b81-874b-3620ffa07f72-service-ca-bundle\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.826487 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfcc28cc-4e1d-47a3-89f7-f65d719e320a-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-zd562\" (UID: \"dfcc28cc-4e1d-47a3-89f7-f65d719e320a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.826509 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqf6x\" (UniqueName: \"kubernetes.io/projected/feb22448-6135-462d-91a3-66851678143d-kube-api-access-tqf6x\") pod \"marketplace-operator-79b997595-2xx98\" (UID: \"feb22448-6135-462d-91a3-66851678143d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.826627 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5tz7\" (UniqueName: \"kubernetes.io/projected/e227258a-5822-472d-a151-aa7c07951330-kube-api-access-z5tz7\") pod \"ingress-operator-5b745b69d9-rk2bl\" (UID: \"e227258a-5822-472d-a151-aa7c07951330\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.826757 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-default-certificate\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.826788 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e227258a-5822-472d-a151-aa7c07951330-trusted-ca\") pod \"ingress-operator-5b745b69d9-rk2bl\" (UID: \"e227258a-5822-472d-a151-aa7c07951330\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.826911 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b17ef7c4-a962-4759-9441-33d28b384b4e-metrics-tls\") pod \"dns-operator-744455d44c-2b6kx\" (UID: \"b17ef7c4-a962-4759-9441-33d28b384b4e\") " pod="openshift-dns-operator/dns-operator-744455d44c-2b6kx" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827364 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d05b459f-b9a4-425d-936a-60ee9dc5b5f0-signing-key\") pod \"service-ca-9c57cc56f-rnnlg\" (UID: \"d05b459f-b9a4-425d-936a-60ee9dc5b5f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827431 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/939719c1-bfcc-469b-a627-627761c67f47-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wkr4f\" (UID: \"939719c1-bfcc-469b-a627-627761c67f47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827459 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2b92\" (UniqueName: \"kubernetes.io/projected/6ded5789-2ec3-42f6-8a56-b575b8fa7dfd-kube-api-access-t2b92\") pod \"machine-config-operator-74547568cd-tkqjf\" (UID: \"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" Oct 08 13:20:37 
crc kubenswrapper[5065]: I1008 13:20:37.827484 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/696bcbce-29a9-4686-9ac0-e5af4558fc82-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jhhxk\" (UID: \"696bcbce-29a9-4686-9ac0-e5af4558fc82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827662 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwgjp\" (UniqueName: \"kubernetes.io/projected/b923335b-a2b2-4919-909d-70a6d141c798-kube-api-access-xwgjp\") pod \"downloads-7954f5f757-lcdtm\" (UID: \"b923335b-a2b2-4919-909d-70a6d141c798\") " pod="openshift-console/downloads-7954f5f757-lcdtm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827709 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-etcd-client\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827732 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7e38b84d-2101-41b3-b75a-45d06288470e-node-bootstrap-token\") pod \"machine-config-server-2rxtm\" (UID: \"7e38b84d-2101-41b3-b75a-45d06288470e\") " pod="openshift-machine-config-operator/machine-config-server-2rxtm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827758 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e227258a-5822-472d-a151-aa7c07951330-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rk2bl\" (UID: \"e227258a-5822-472d-a151-aa7c07951330\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827783 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-metrics-certs\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827808 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6110169d-e524-4d26-a6a5-514ee5384554-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-94t24\" (UID: \"6110169d-e524-4d26-a6a5-514ee5384554\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827838 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b92e0c1d-2733-4e94-9bf2-667b6074ebe0-serving-cert\") pod \"service-ca-operator-777779d784-s52g7\" (UID: \"b92e0c1d-2733-4e94-9bf2-667b6074ebe0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827865 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/6ded5789-2ec3-42f6-8a56-b575b8fa7dfd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tkqjf\" (UID: \"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827889 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bba0864b-5c2f-42d3-bb43-caaaa1dc4267-srv-cert\") pod \"catalog-operator-68c6474976-bbtng\" (UID: \"bba0864b-5c2f-42d3-bb43-caaaa1dc4267\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827914 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/36606460-fa2c-4254-acd5-9de143291cca-apiservice-cert\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827940 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72mnn\" (UniqueName: \"kubernetes.io/projected/bba0864b-5c2f-42d3-bb43-caaaa1dc4267-kube-api-access-72mnn\") pod \"catalog-operator-68c6474976-bbtng\" (UID: \"bba0864b-5c2f-42d3-bb43-caaaa1dc4267\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827962 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d05b459f-b9a4-425d-936a-60ee9dc5b5f0-signing-cabundle\") pod \"service-ca-9c57cc56f-rnnlg\" (UID: \"d05b459f-b9a4-425d-936a-60ee9dc5b5f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.827987 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttzb4\" (UniqueName: \"kubernetes.io/projected/b17ef7c4-a962-4759-9441-33d28b384b4e-kube-api-access-ttzb4\") pod \"dns-operator-744455d44c-2b6kx\" (UID: \"b17ef7c4-a962-4759-9441-33d28b384b4e\") " pod="openshift-dns-operator/dns-operator-744455d44c-2b6kx" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828027 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36606460-fa2c-4254-acd5-9de143291cca-webhook-cert\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828060 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e227258a-5822-472d-a151-aa7c07951330-metrics-tls\") pod \"ingress-operator-5b745b69d9-rk2bl\" (UID: \"e227258a-5822-472d-a151-aa7c07951330\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828096 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696bcbce-29a9-4686-9ac0-e5af4558fc82-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jhhxk\" (UID: \"696bcbce-29a9-4686-9ac0-e5af4558fc82\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828130 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbdvz\" (UniqueName: \"kubernetes.io/projected/2ea1c820-2ae9-4b81-874b-3620ffa07f72-kube-api-access-kbdvz\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828153 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhd9c\" (UniqueName: \"kubernetes.io/projected/d05b459f-b9a4-425d-936a-60ee9dc5b5f0-kube-api-access-qhd9c\") pod \"service-ca-9c57cc56f-rnnlg\" (UID: \"d05b459f-b9a4-425d-936a-60ee9dc5b5f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828176 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl4hw\" (UniqueName: \"kubernetes.io/projected/696bcbce-29a9-4686-9ac0-e5af4558fc82-kube-api-access-sl4hw\") pod \"openshift-controller-manager-operator-756b6f6bc6-jhhxk\" (UID: \"696bcbce-29a9-4686-9ac0-e5af4558fc82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828203 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4278c38-600b-497f-927d-04791c551470-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cbgft\" (UID: \"b4278c38-600b-497f-927d-04791c551470\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828229 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28c60830-7cae-45ed-bbe5-edbb83a24e87-service-ca-bundle\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828267 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2648eeb-e556-43bd-a3de-ace83e540571-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gzl88\" (UID: \"c2648eeb-e556-43bd-a3de-ace83e540571\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828304 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86607750-37d4-45d3-bc51-8633912e77fd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-j97lr\" (UID: \"86607750-37d4-45d3-bc51-8633912e77fd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828341 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de258063-13a0-4a3d-93f4-b39fd81902cb-config\") pod \"kube-controller-manager-operator-78b949d7b-tzwhd\" (UID: \"de258063-13a0-4a3d-93f4-b39fd81902cb\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828363 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b92e0c1d-2733-4e94-9bf2-667b6074ebe0-config\") pod \"service-ca-operator-777779d784-s52g7\" (UID: \"b92e0c1d-2733-4e94-9bf2-667b6074ebe0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828391 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ea1c820-2ae9-4b81-874b-3620ffa07f72-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828442 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h62kr\" (UniqueName: \"kubernetes.io/projected/28c60830-7cae-45ed-bbe5-edbb83a24e87-kube-api-access-h62kr\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828470 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4278c38-600b-497f-927d-04791c551470-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cbgft\" (UID: \"b4278c38-600b-497f-927d-04791c551470\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828494 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b719c48b-49ca-4947-8e2f-77523c4360ac-config-volume\") pod \"collect-profiles-29332155-547bm\" (UID: \"b719c48b-49ca-4947-8e2f-77523c4360ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828518 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-config\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828531 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ded5789-2ec3-42f6-8a56-b575b8fa7dfd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tkqjf\" (UID: \"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828548 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de258063-13a0-4a3d-93f4-b39fd81902cb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tzwhd\" (UID: \"de258063-13a0-4a3d-93f4-b39fd81902cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 
13:20:37.828565 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828585 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/36606460-fa2c-4254-acd5-9de143291cca-tmpfs\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828607 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtcj5\" (UniqueName: \"kubernetes.io/projected/6110169d-e524-4d26-a6a5-514ee5384554-kube-api-access-jtcj5\") pod \"package-server-manager-789f6589d5-94t24\" (UID: \"6110169d-e524-4d26-a6a5-514ee5384554\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828636 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86607750-37d4-45d3-bc51-8633912e77fd-config\") pod \"kube-apiserver-operator-766d6c64bb-j97lr\" (UID: \"86607750-37d4-45d3-bc51-8633912e77fd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828657 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jqts\" (UniqueName: \"kubernetes.io/projected/939719c1-bfcc-469b-a627-627761c67f47-kube-api-access-6jqts\") pod \"olm-operator-6b444d44fb-wkr4f\" (UID: \"939719c1-bfcc-469b-a627-627761c67f47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828680 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf7l2\" (UniqueName: \"kubernetes.io/projected/b719c48b-49ca-4947-8e2f-77523c4360ac-kube-api-access-gf7l2\") pod \"collect-profiles-29332155-547bm\" (UID: \"b719c48b-49ca-4947-8e2f-77523c4360ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828707 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de258063-13a0-4a3d-93f4-b39fd81902cb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tzwhd\" (UID: \"de258063-13a0-4a3d-93f4-b39fd81902cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828723 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86607750-37d4-45d3-bc51-8633912e77fd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-j97lr\" (UID: \"86607750-37d4-45d3-bc51-8633912e77fd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828739 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-serving-cert\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828757 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea1c820-2ae9-4b81-874b-3620ffa07f72-config\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828774 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea1c820-2ae9-4b81-874b-3620ffa07f72-serving-cert\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828808 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfcc28cc-4e1d-47a3-89f7-f65d719e320a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd562\" (UID: \"dfcc28cc-4e1d-47a3-89f7-f65d719e320a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828841 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfcc28cc-4e1d-47a3-89f7-f65d719e320a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd562\" (UID: \"dfcc28cc-4e1d-47a3-89f7-f65d719e320a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828868 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxxq9\" (UniqueName: \"kubernetes.io/projected/c2648eeb-e556-43bd-a3de-ace83e540571-kube-api-access-nxxq9\") pod \"cluster-image-registry-operator-dc59b4c8b-gzl88\" (UID: \"c2648eeb-e556-43bd-a3de-ace83e540571\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828887 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfkr5\" (UniqueName: \"kubernetes.io/projected/b92e0c1d-2733-4e94-9bf2-667b6074ebe0-kube-api-access-gfkr5\") pod \"service-ca-operator-777779d784-s52g7\" (UID: \"b92e0c1d-2733-4e94-9bf2-667b6074ebe0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828920 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b719c48b-49ca-4947-8e2f-77523c4360ac-secret-volume\") pod \"collect-profiles-29332155-547bm\" (UID: \"b719c48b-49ca-4947-8e2f-77523c4360ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828937 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6ded5789-2ec3-42f6-8a56-b575b8fa7dfd-images\") pod \"machine-config-operator-74547568cd-tkqjf\" (UID: \"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" Oct 08 
13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828957 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-etcd-service-ca\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828976 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2648eeb-e556-43bd-a3de-ace83e540571-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gzl88\" (UID: \"c2648eeb-e556-43bd-a3de-ace83e540571\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.828993 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bba0864b-5c2f-42d3-bb43-caaaa1dc4267-profile-collector-cert\") pod \"catalog-operator-68c6474976-bbtng\" (UID: \"bba0864b-5c2f-42d3-bb43-caaaa1dc4267\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.829009 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ded5789-2ec3-42f6-8a56-b575b8fa7dfd-proxy-tls\") pod \"machine-config-operator-74547568cd-tkqjf\" (UID: \"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.829027 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/feb22448-6135-462d-91a3-66851678143d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2xx98\" (UID: \"feb22448-6135-462d-91a3-66851678143d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.829574 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/36606460-fa2c-4254-acd5-9de143291cca-tmpfs\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.830194 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/696bcbce-29a9-4686-9ac0-e5af4558fc82-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jhhxk\" (UID: \"696bcbce-29a9-4686-9ac0-e5af4558fc82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.830506 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696bcbce-29a9-4686-9ac0-e5af4558fc82-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jhhxk\" (UID: \"696bcbce-29a9-4686-9ac0-e5af4558fc82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.830609 5065 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6ded5789-2ec3-42f6-8a56-b575b8fa7dfd-images\") pod \"machine-config-operator-74547568cd-tkqjf\" (UID: \"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.830857 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4278c38-600b-497f-927d-04791c551470-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cbgft\" (UID: \"b4278c38-600b-497f-927d-04791c551470\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.831307 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b17ef7c4-a962-4759-9441-33d28b384b4e-metrics-tls\") pod \"dns-operator-744455d44c-2b6kx\" (UID: \"b17ef7c4-a962-4759-9441-33d28b384b4e\") " pod="openshift-dns-operator/dns-operator-744455d44c-2b6kx" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.832353 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bba0864b-5c2f-42d3-bb43-caaaa1dc4267-srv-cert\") pod \"catalog-operator-68c6474976-bbtng\" (UID: \"bba0864b-5c2f-42d3-bb43-caaaa1dc4267\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.832372 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4278c38-600b-497f-927d-04791c551470-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cbgft\" (UID: \"b4278c38-600b-497f-927d-04791c551470\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.832872 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bba0864b-5c2f-42d3-bb43-caaaa1dc4267-profile-collector-cert\") pod \"catalog-operator-68c6474976-bbtng\" (UID: \"bba0864b-5c2f-42d3-bb43-caaaa1dc4267\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.833293 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/939719c1-bfcc-469b-a627-627761c67f47-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wkr4f\" (UID: \"939719c1-bfcc-469b-a627-627761c67f47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.833488 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b719c48b-49ca-4947-8e2f-77523c4360ac-secret-volume\") pod \"collect-profiles-29332155-547bm\" (UID: \"b719c48b-49ca-4947-8e2f-77523c4360ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.833765 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ded5789-2ec3-42f6-8a56-b575b8fa7dfd-proxy-tls\") pod \"machine-config-operator-74547568cd-tkqjf\" (UID: 
\"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.849468 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.868726 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.877674 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/939719c1-bfcc-469b-a627-627761c67f47-srv-cert\") pod \"olm-operator-6b444d44fb-wkr4f\" (UID: \"939719c1-bfcc-469b-a627-627761c67f47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.889086 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.909682 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.925035 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2648eeb-e556-43bd-a3de-ace83e540571-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gzl88\" (UID: \"c2648eeb-e556-43bd-a3de-ace83e540571\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.932092 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.936319 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea1c820-2ae9-4b81-874b-3620ffa07f72-serving-cert\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.949281 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.969030 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.988898 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 08 13:20:37 crc kubenswrapper[5065]: I1008 13:20:37.990564 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea1c820-2ae9-4b81-874b-3620ffa07f72-config\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.008808 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 08 13:20:38 crc 
kubenswrapper[5065]: I1008 13:20:38.038211 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.041388 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ea1c820-2ae9-4b81-874b-3620ffa07f72-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.048794 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.069685 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.089901 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.095873 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ea1c820-2ae9-4b81-874b-3620ffa07f72-service-ca-bundle\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.109097 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.115527 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e227258a-5822-472d-a151-aa7c07951330-metrics-tls\") pod \"ingress-operator-5b745b69d9-rk2bl\" (UID: \"e227258a-5822-472d-a151-aa7c07951330\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.128990 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.169654 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.170665 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.178189 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e227258a-5822-472d-a151-aa7c07951330-trusted-ca\") pod \"ingress-operator-5b745b69d9-rk2bl\" (UID: \"e227258a-5822-472d-a151-aa7c07951330\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.189318 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.209326 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 08 13:20:38 crc 
kubenswrapper[5065]: I1008 13:20:38.214067 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86607750-37d4-45d3-bc51-8633912e77fd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-j97lr\" (UID: \"86607750-37d4-45d3-bc51-8633912e77fd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.228956 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.248586 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.268229 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.273157 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/feb22448-6135-462d-91a3-66851678143d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2xx98\" (UID: \"feb22448-6135-462d-91a3-66851678143d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.295965 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.306351 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/feb22448-6135-462d-91a3-66851678143d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2xx98\" (UID: \"feb22448-6135-462d-91a3-66851678143d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.309267 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.311177 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86607750-37d4-45d3-bc51-8633912e77fd-config\") pod \"kube-apiserver-operator-766d6c64bb-j97lr\" (UID: \"86607750-37d4-45d3-bc51-8633912e77fd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.329445 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.349516 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.369438 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.389711 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.412376 5065 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.421785 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfcc28cc-4e1d-47a3-89f7-f65d719e320a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd562\" (UID: \"dfcc28cc-4e1d-47a3-89f7-f65d719e320a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.430035 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.441464 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfcc28cc-4e1d-47a3-89f7-f65d719e320a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd562\" (UID: \"dfcc28cc-4e1d-47a3-89f7-f65d719e320a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.449689 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.468822 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.488740 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.497118 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-etcd-ca\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.528845 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.543242 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-etcd-client\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.547206 5065 request.go:700] Waited for 1.005240299s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&limit=500&resourceVersion=0 Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.549375 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.551991 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-config\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.569150 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.571907 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-etcd-service-ca\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.589642 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.609396 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.612986 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-serving-cert\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.629625 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.648541 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.668686 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.680788 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d05b459f-b9a4-425d-936a-60ee9dc5b5f0-signing-key\") pod \"service-ca-9c57cc56f-rnnlg\" (UID: \"d05b459f-b9a4-425d-936a-60ee9dc5b5f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.690625 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.701283 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d05b459f-b9a4-425d-936a-60ee9dc5b5f0-signing-cabundle\") pod \"service-ca-9c57cc56f-rnnlg\" (UID: \"d05b459f-b9a4-425d-936a-60ee9dc5b5f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.709847 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.729064 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.734652 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6110169d-e524-4d26-a6a5-514ee5384554-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-94t24\" (UID: \"6110169d-e524-4d26-a6a5-514ee5384554\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.749879 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.764712 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b92e0c1d-2733-4e94-9bf2-667b6074ebe0-serving-cert\") pod \"service-ca-operator-777779d784-s52g7\" (UID: \"b92e0c1d-2733-4e94-9bf2-667b6074ebe0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.768495 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.788227 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.790391 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b92e0c1d-2733-4e94-9bf2-667b6074ebe0-config\") pod \"service-ca-operator-777779d784-s52g7\" (UID: \"b92e0c1d-2733-4e94-9bf2-667b6074ebe0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.809615 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.825474 5065 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.825476 5065 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.825595 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-stats-auth podName:28c60830-7cae-45ed-bbe5-edbb83a24e87 nodeName:}" failed. No retries permitted until 2025-10-08 13:20:39.325570421 +0000 UTC m=+141.102952188 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-stats-auth") pod "router-default-5444994796-8zlcn" (UID: "28c60830-7cae-45ed-bbe5-edbb83a24e87") : failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.825677 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e38b84d-2101-41b3-b75a-45d06288470e-certs podName:7e38b84d-2101-41b3-b75a-45d06288470e nodeName:}" failed. No retries permitted until 2025-10-08 13:20:39.325642113 +0000 UTC m=+141.103023910 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/7e38b84d-2101-41b3-b75a-45d06288470e-certs") pod "machine-config-server-2rxtm" (UID: "7e38b84d-2101-41b3-b75a-45d06288470e") : failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.827695 5065 secret.go:188] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.827824 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-default-certificate podName:28c60830-7cae-45ed-bbe5-edbb83a24e87 nodeName:}" failed. No retries permitted until 2025-10-08 13:20:39.327797573 +0000 UTC m=+141.105179330 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-default-certificate") pod "router-default-5444994796-8zlcn" (UID: "28c60830-7cae-45ed-bbe5-edbb83a24e87") : failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.829024 5065 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.829128 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e38b84d-2101-41b3-b75a-45d06288470e-node-bootstrap-token podName:7e38b84d-2101-41b3-b75a-45d06288470e nodeName:}" failed. No retries permitted until 2025-10-08 13:20:39.329099969 +0000 UTC m=+141.106481776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/7e38b84d-2101-41b3-b75a-45d06288470e-node-bootstrap-token") pod "machine-config-server-2rxtm" (UID: "7e38b84d-2101-41b3-b75a-45d06288470e") : failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.829178 5065 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.829222 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-metrics-certs podName:28c60830-7cae-45ed-bbe5-edbb83a24e87 nodeName:}" failed. No retries permitted until 2025-10-08 13:20:39.329207772 +0000 UTC m=+141.106589569 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-metrics-certs") pod "router-default-5444994796-8zlcn" (UID: "28c60830-7cae-45ed-bbe5-edbb83a24e87") : failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.829411 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.829667 5065 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.829701 5065 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.829725 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28c60830-7cae-45ed-bbe5-edbb83a24e87-service-ca-bundle podName:28c60830-7cae-45ed-bbe5-edbb83a24e87 nodeName:}" failed. No retries permitted until 2025-10-08 13:20:39.329710147 +0000 UTC m=+141.107091944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/28c60830-7cae-45ed-bbe5-edbb83a24e87-service-ca-bundle") pod "router-default-5444994796-8zlcn" (UID: "28c60830-7cae-45ed-bbe5-edbb83a24e87") : failed to sync configmap cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.829764 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36606460-fa2c-4254-acd5-9de143291cca-webhook-cert podName:36606460-fa2c-4254-acd5-9de143291cca nodeName:}" failed. No retries permitted until 2025-10-08 13:20:39.329746838 +0000 UTC m=+141.107128605 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/36606460-fa2c-4254-acd5-9de143291cca-webhook-cert") pod "packageserver-d55dfcdfc-mzc8r" (UID: "36606460-fa2c-4254-acd5-9de143291cca") : failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.829768 5065 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.829863 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36606460-fa2c-4254-acd5-9de143291cca-apiservice-cert podName:36606460-fa2c-4254-acd5-9de143291cca nodeName:}" failed. No retries permitted until 2025-10-08 13:20:39.32983966 +0000 UTC m=+141.107221417 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/36606460-fa2c-4254-acd5-9de143291cca-apiservice-cert") pod "packageserver-d55dfcdfc-mzc8r" (UID: "36606460-fa2c-4254-acd5-9de143291cca") : failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.829774 5065 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.829982 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de258063-13a0-4a3d-93f4-b39fd81902cb-config podName:de258063-13a0-4a3d-93f4-b39fd81902cb nodeName:}" failed. No retries permitted until 2025-10-08 13:20:39.329953473 +0000 UTC m=+141.107335230 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/de258063-13a0-4a3d-93f4-b39fd81902cb-config") pod "kube-controller-manager-operator-78b949d7b-tzwhd" (UID: "de258063-13a0-4a3d-93f4-b39fd81902cb") : failed to sync configmap cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.830719 5065 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.830769 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b719c48b-49ca-4947-8e2f-77523c4360ac-config-volume podName:b719c48b-49ca-4947-8e2f-77523c4360ac nodeName:}" failed. No retries permitted until 2025-10-08 13:20:39.330757906 +0000 UTC m=+141.108139663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b719c48b-49ca-4947-8e2f-77523c4360ac-config-volume") pod "collect-profiles-29332155-547bm" (UID: "b719c48b-49ca-4947-8e2f-77523c4360ac") : failed to sync configmap cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.831551 5065 secret.go:188] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: E1008 13:20:38.831609 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de258063-13a0-4a3d-93f4-b39fd81902cb-serving-cert podName:de258063-13a0-4a3d-93f4-b39fd81902cb nodeName:}" failed. No retries permitted until 2025-10-08 13:20:39.331595569 +0000 UTC m=+141.108977326 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/de258063-13a0-4a3d-93f4-b39fd81902cb-serving-cert") pod "kube-controller-manager-operator-78b949d7b-tzwhd" (UID: "de258063-13a0-4a3d-93f4-b39fd81902cb") : failed to sync secret cache: timed out waiting for the condition Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.848704 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.868749 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.889173 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.908994 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.928995 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.950355 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.968754 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 08 13:20:38 crc kubenswrapper[5065]: I1008 13:20:38.989004 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.008863 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.029666 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.049204 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.068836 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.088939 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.109155 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.128537 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.149688 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.169311 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" 
Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.189766 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.208695 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.228728 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.269320 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.289848 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.310147 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.330113 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.350891 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.356741 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/36606460-fa2c-4254-acd5-9de143291cca-apiservice-cert\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.356846 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36606460-fa2c-4254-acd5-9de143291cca-webhook-cert\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.357021 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28c60830-7cae-45ed-bbe5-edbb83a24e87-service-ca-bundle\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.357197 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de258063-13a0-4a3d-93f4-b39fd81902cb-config\") pod \"kube-controller-manager-operator-78b949d7b-tzwhd\" (UID: \"de258063-13a0-4a3d-93f4-b39fd81902cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.357296 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b719c48b-49ca-4947-8e2f-77523c4360ac-config-volume\") pod \"collect-profiles-29332155-547bm\" (UID: \"b719c48b-49ca-4947-8e2f-77523c4360ac\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.357405 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de258063-13a0-4a3d-93f4-b39fd81902cb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tzwhd\" (UID: \"de258063-13a0-4a3d-93f4-b39fd81902cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.357749 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-stats-auth\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.357809 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7e38b84d-2101-41b3-b75a-45d06288470e-certs\") pod \"machine-config-server-2rxtm\" (UID: \"7e38b84d-2101-41b3-b75a-45d06288470e\") " pod="openshift-machine-config-operator/machine-config-server-2rxtm" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.358006 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-default-certificate\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.358181 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28c60830-7cae-45ed-bbe5-edbb83a24e87-service-ca-bundle\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.358243 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7e38b84d-2101-41b3-b75a-45d06288470e-node-bootstrap-token\") pod \"machine-config-server-2rxtm\" (UID: \"7e38b84d-2101-41b3-b75a-45d06288470e\") " pod="openshift-machine-config-operator/machine-config-server-2rxtm" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.358301 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-metrics-certs\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.358686 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b719c48b-49ca-4947-8e2f-77523c4360ac-config-volume\") pod \"collect-profiles-29332155-547bm\" (UID: \"b719c48b-49ca-4947-8e2f-77523c4360ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.358917 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/de258063-13a0-4a3d-93f4-b39fd81902cb-config\") pod \"kube-controller-manager-operator-78b949d7b-tzwhd\" (UID: \"de258063-13a0-4a3d-93f4-b39fd81902cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.360675 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/36606460-fa2c-4254-acd5-9de143291cca-apiservice-cert\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.360743 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36606460-fa2c-4254-acd5-9de143291cca-webhook-cert\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.362082 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-default-certificate\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.363142 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de258063-13a0-4a3d-93f4-b39fd81902cb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tzwhd\" (UID: \"de258063-13a0-4a3d-93f4-b39fd81902cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.363382 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7e38b84d-2101-41b3-b75a-45d06288470e-certs\") pod \"machine-config-server-2rxtm\" (UID: \"7e38b84d-2101-41b3-b75a-45d06288470e\") " pod="openshift-machine-config-operator/machine-config-server-2rxtm" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.365794 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7e38b84d-2101-41b3-b75a-45d06288470e-node-bootstrap-token\") pod \"machine-config-server-2rxtm\" (UID: \"7e38b84d-2101-41b3-b75a-45d06288470e\") " pod="openshift-machine-config-operator/machine-config-server-2rxtm" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.367245 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-stats-auth\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.368580 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28c60830-7cae-45ed-bbe5-edbb83a24e87-metrics-certs\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 
13:20:39.370127 5065 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.390587 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.409569 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.428966 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.450039 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.496766 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb6jm\" (UniqueName: \"kubernetes.io/projected/09664f6d-52dd-48af-b1ad-d19e58094ecc-kube-api-access-bb6jm\") pod \"controller-manager-879f6c89f-fplnt\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.506923 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzw89\" (UniqueName: \"kubernetes.io/projected/d7443ea7-16f6-449c-baea-52a1facd0967-kube-api-access-qzw89\") pod \"console-operator-58897d9998-xpzqc\" (UID: \"d7443ea7-16f6-449c-baea-52a1facd0967\") " pod="openshift-console-operator/console-operator-58897d9998-xpzqc" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.534201 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7k5v\" (UniqueName: \"kubernetes.io/projected/c300f213-f82e-4f8c-8402-9e0af05d049c-kube-api-access-q7k5v\") pod \"machine-config-controller-84d6567774-fx6cj\" (UID: \"c300f213-f82e-4f8c-8402-9e0af05d049c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.545203 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8tgv\" (UniqueName: \"kubernetes.io/projected/af14fd36-37c4-43d6-aabc-722f41b42da1-kube-api-access-m8tgv\") pod \"machine-approver-56656f9798-g2w6g\" (UID: \"af14fd36-37c4-43d6-aabc-722f41b42da1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.547634 5065 request.go:700] Waited for 1.823702946s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.560310 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.564931 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jtzw\" (UniqueName: \"kubernetes.io/projected/6877d346-fa92-428a-859c-218fdfe5ca4f-kube-api-access-5jtzw\") pod \"console-f9d7485db-w27qr\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") " pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.577986 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.588585 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxwgz\" (UniqueName: \"kubernetes.io/projected/2f5abc48-dd97-49da-8c32-b388116c092a-kube-api-access-hxwgz\") pod \"openshift-config-operator-7777fb866f-7jrh9\" (UID: \"2f5abc48-dd97-49da-8c32-b388116c092a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.610885 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs29m\" (UniqueName: \"kubernetes.io/projected/cf77c43f-8ce4-40aa-81bf-d2d40068edcc-kube-api-access-qs29m\") pod \"migrator-59844c95c7-kw2gp\" (UID: \"cf77c43f-8ce4-40aa-81bf-d2d40068edcc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kw2gp" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.627264 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62kqv\" (UniqueName: \"kubernetes.io/projected/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-kube-api-access-62kqv\") pod \"route-controller-manager-6576b87f9c-r4tnh\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.637220 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.650228 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd8v8\" (UniqueName: \"kubernetes.io/projected/42132cd2-ec8f-47e2-8011-6f39c454977f-kube-api-access-zd8v8\") pod \"machine-api-operator-5694c8668f-xr8vs\" (UID: \"42132cd2-ec8f-47e2-8011-6f39c454977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.654341 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.666851 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.667314 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42rkq\" (UniqueName: \"kubernetes.io/projected/efd7ad79-f03d-486a-88d8-8be245697463-kube-api-access-42rkq\") pod \"multus-admission-controller-857f4d67dd-zlcv8\" (UID: \"efd7ad79-f03d-486a-88d8-8be245697463\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zlcv8" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.705246 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xpzqc" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.706136 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kw2gp" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.710106 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qlkc\" (UniqueName: \"kubernetes.io/projected/025e6f00-f56b-4674-9cf8-6ddb57afe15f-kube-api-access-7qlkc\") pod \"control-plane-machine-set-operator-78cbb6b69f-c8bkc\" (UID: \"025e6f00-f56b-4674-9cf8-6ddb57afe15f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.726455 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4j2m\" (UniqueName: \"kubernetes.io/projected/61abc989-efa8-41c2-ae46-1c7015e76aee-kube-api-access-w4j2m\") pod \"apiserver-76f77b778f-8t8br\" (UID: \"61abc989-efa8-41c2-ae46-1c7015e76aee\") " pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.730778 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zlcv8" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.739380 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.750133 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.764272 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bsgh\" (UniqueName: \"kubernetes.io/projected/07e8eaa7-dc9d-4581-a962-554de51f6137-kube-api-access-5bsgh\") pod \"apiserver-7bbb656c7d-mrbd6\" (UID: \"07e8eaa7-dc9d-4581-a962-554de51f6137\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.770213 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hsjc\" (UniqueName: \"kubernetes.io/projected/562d8067-863a-4644-9fd6-f51281a2191b-kube-api-access-8hsjc\") pod \"kube-storage-version-migrator-operator-b67b599dd-5wd6t\" (UID: \"562d8067-863a-4644-9fd6-f51281a2191b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.772150 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq76r\" (UniqueName: \"kubernetes.io/projected/f86b9043-eb02-42e7-b53b-2e684dd2ad26-kube-api-access-nq76r\") pod \"cluster-samples-operator-665b6dd947-87chs\" (UID: \"f86b9043-eb02-42e7-b53b-2e684dd2ad26\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.788044 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t7fv\" (UniqueName: \"kubernetes.io/projected/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-kube-api-access-7t7fv\") pod \"oauth-openshift-558db77b4-8gdt7\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.809729 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkzqf\" (UniqueName: 
\"kubernetes.io/projected/36606460-fa2c-4254-acd5-9de143291cca-kube-api-access-nkzqf\") pod \"packageserver-d55dfcdfc-mzc8r\" (UID: \"36606460-fa2c-4254-acd5-9de143291cca\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.824880 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc56d\" (UniqueName: \"kubernetes.io/projected/7e38b84d-2101-41b3-b75a-45d06288470e-kube-api-access-lc56d\") pod \"machine-config-server-2rxtm\" (UID: \"7e38b84d-2101-41b3-b75a-45d06288470e\") " pod="openshift-machine-config-operator/machine-config-server-2rxtm" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.852102 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgt2h\" (UniqueName: \"kubernetes.io/projected/6ac35ef1-5519-41d1-b2a3-61b03415fbaa-kube-api-access-hgt2h\") pod \"etcd-operator-b45778765-nx54q\" (UID: \"6ac35ef1-5519-41d1-b2a3-61b03415fbaa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.866172 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.867378 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prwzt\" (UniqueName: \"kubernetes.io/projected/b4278c38-600b-497f-927d-04791c551470-kube-api-access-prwzt\") pod \"openshift-apiserver-operator-796bbdcf4f-cbgft\" (UID: \"b4278c38-600b-497f-927d-04791c551470\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.890468 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqf6x\" (UniqueName: \"kubernetes.io/projected/feb22448-6135-462d-91a3-66851678143d-kube-api-access-tqf6x\") pod \"marketplace-operator-79b997595-2xx98\" (UID: \"feb22448-6135-462d-91a3-66851678143d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.908251 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5tz7\" (UniqueName: \"kubernetes.io/projected/e227258a-5822-472d-a151-aa7c07951330-kube-api-access-z5tz7\") pod \"ingress-operator-5b745b69d9-rk2bl\" (UID: \"e227258a-5822-472d-a151-aa7c07951330\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.909301 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.915592 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fplnt"] Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.922047 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.929222 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2b92\" (UniqueName: \"kubernetes.io/projected/6ded5789-2ec3-42f6-8a56-b575b8fa7dfd-kube-api-access-t2b92\") pod \"machine-config-operator-74547568cd-tkqjf\" (UID: \"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.939334 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9"] Oct 08 13:20:39 crc kubenswrapper[5065]: W1008 13:20:39.940226 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09664f6d_52dd_48af_b1ad_d19e58094ecc.slice/crio-2ecb73939ca27f2cd04627979e05b7f458ec77ffc015f0c25bc890b124baa7e6 WatchSource:0}: Error finding container 2ecb73939ca27f2cd04627979e05b7f458ec77ffc015f0c25bc890b124baa7e6: Status 404 returned error can't find the container with id 2ecb73939ca27f2cd04627979e05b7f458ec77ffc015f0c25bc890b124baa7e6 Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.944169 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwgjp\" (UniqueName: \"kubernetes.io/projected/b923335b-a2b2-4919-909d-70a6d141c798-kube-api-access-xwgjp\") pod \"downloads-7954f5f757-lcdtm\" (UID: \"b923335b-a2b2-4919-909d-70a6d141c798\") " pod="openshift-console/downloads-7954f5f757-lcdtm" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.960031 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.968919 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e227258a-5822-472d-a151-aa7c07951330-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rk2bl\" (UID: \"e227258a-5822-472d-a151-aa7c07951330\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" Oct 08 13:20:39 crc kubenswrapper[5065]: W1008 13:20:39.975065 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f5abc48_dd97_49da_8c32_b388116c092a.slice/crio-4863ecb473722748937b1e4be6d87aa60e3d6afee6a25a4f025c7e01c24c4dab WatchSource:0}: Error finding container 4863ecb473722748937b1e4be6d87aa60e3d6afee6a25a4f025c7e01c24c4dab: Status 404 returned error can't find the container with id 4863ecb473722748937b1e4be6d87aa60e3d6afee6a25a4f025c7e01c24c4dab Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.986332 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh"] Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.988593 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de258063-13a0-4a3d-93f4-b39fd81902cb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tzwhd\" (UID: \"de258063-13a0-4a3d-93f4-b39fd81902cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" Oct 08 13:20:39 crc kubenswrapper[5065]: I1008 13:20:39.988897 5065 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.001093 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2rxtm" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.011139 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2648eeb-e556-43bd-a3de-ace83e540571-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gzl88\" (UID: \"c2648eeb-e556-43bd-a3de-ace83e540571\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.014366 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.028118 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72mnn\" (UniqueName: \"kubernetes.io/projected/bba0864b-5c2f-42d3-bb43-caaaa1dc4267-kube-api-access-72mnn\") pod \"catalog-operator-68c6474976-bbtng\" (UID: \"bba0864b-5c2f-42d3-bb43-caaaa1dc4267\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.050936 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jqts\" (UniqueName: \"kubernetes.io/projected/939719c1-bfcc-469b-a627-627761c67f47-kube-api-access-6jqts\") pod \"olm-operator-6b444d44fb-wkr4f\" (UID: \"939719c1-bfcc-469b-a627-627761c67f47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.059703 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.066463 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-w27qr"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.066674 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.077395 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.078205 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttzb4\" (UniqueName: \"kubernetes.io/projected/b17ef7c4-a962-4759-9441-33d28b384b4e-kube-api-access-ttzb4\") pod \"dns-operator-744455d44c-2b6kx\" (UID: \"b17ef7c4-a962-4759-9441-33d28b384b4e\") " pod="openshift-dns-operator/dns-operator-744455d44c-2b6kx" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.085508 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbdvz\" (UniqueName: \"kubernetes.io/projected/2ea1c820-2ae9-4b81-874b-3620ffa07f72-kube-api-access-kbdvz\") pod \"authentication-operator-69f744f599-6gx5j\" (UID: \"2ea1c820-2ae9-4b81-874b-3620ffa07f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.094112 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.103674 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.107869 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhd9c\" (UniqueName: \"kubernetes.io/projected/d05b459f-b9a4-425d-936a-60ee9dc5b5f0-kube-api-access-qhd9c\") pod \"service-ca-9c57cc56f-rnnlg\" (UID: \"d05b459f-b9a4-425d-936a-60ee9dc5b5f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.113868 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2b6kx" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.123952 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86607750-37d4-45d3-bc51-8633912e77fd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-j97lr\" (UID: \"86607750-37d4-45d3-bc51-8633912e77fd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.151915 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf7l2\" (UniqueName: \"kubernetes.io/projected/b719c48b-49ca-4947-8e2f-77523c4360ac-kube-api-access-gf7l2\") pod \"collect-profiles-29332155-547bm\" (UID: \"b719c48b-49ca-4947-8e2f-77523c4360ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.163454 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.170728 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.175975 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.177187 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxxq9\" (UniqueName: \"kubernetes.io/projected/c2648eeb-e556-43bd-a3de-ace83e540571-kube-api-access-nxxq9\") pod \"cluster-image-registry-operator-dc59b4c8b-gzl88\" (UID: \"c2648eeb-e556-43bd-a3de-ace83e540571\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.185663 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.190586 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.195146 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-lcdtm" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.211304 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfcc28cc-4e1d-47a3-89f7-f65d719e320a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd562\" (UID: \"dfcc28cc-4e1d-47a3-89f7-f65d719e320a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.215912 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.220175 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfkr5\" (UniqueName: \"kubernetes.io/projected/b92e0c1d-2733-4e94-9bf2-667b6074ebe0-kube-api-access-gfkr5\") pod \"service-ca-operator-777779d784-s52g7\" (UID: \"b92e0c1d-2733-4e94-9bf2-667b6074ebe0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.236909 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl4hw\" (UniqueName: \"kubernetes.io/projected/696bcbce-29a9-4686-9ac0-e5af4558fc82-kube-api-access-sl4hw\") pod \"openshift-controller-manager-operator-756b6f6bc6-jhhxk\" (UID: \"696bcbce-29a9-4686-9ac0-e5af4558fc82\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.239215 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xpzqc"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.243622 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h62kr\" (UniqueName: \"kubernetes.io/projected/28c60830-7cae-45ed-bbe5-edbb83a24e87-kube-api-access-h62kr\") pod \"router-default-5444994796-8zlcn\" (UID: \"28c60830-7cae-45ed-bbe5-edbb83a24e87\") " pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.254137 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.267817 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.270640 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.271163 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtcj5\" (UniqueName: \"kubernetes.io/projected/6110169d-e524-4d26-a6a5-514ee5384554-kube-api-access-jtcj5\") pod \"package-server-manager-789f6589d5-94t24\" (UID: \"6110169d-e524-4d26-a6a5-514ee5384554\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.277101 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.308548 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zlcv8"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.311911 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.385128 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e2d2016-716c-4261-a1c0-5dbd804a65d8-trusted-ca\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.385185 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e2d2016-716c-4261-a1c0-5dbd804a65d8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.385210 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0e2d2016-716c-4261-a1c0-5dbd804a65d8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.385239 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.385256 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-registry-tls\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.385281 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-bound-sa-token\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.385329 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e2d2016-716c-4261-a1c0-5dbd804a65d8-registry-certificates\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.385346 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d66pg\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-kube-api-access-d66pg\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: E1008 13:20:40.385722 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:40.885710251 +0000 UTC m=+142.663092008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.400438 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.407230 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.424954 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.446713 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kw2gp"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.486510 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.486749 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e2d2016-716c-4261-a1c0-5dbd804a65d8-trusted-ca\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.486874 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e2d2016-716c-4261-a1c0-5dbd804a65d8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.486966 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0e2d2016-716c-4261-a1c0-5dbd804a65d8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487070 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-registry-tls\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487097 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acf9352c-2b0b-449c-a248-78a349654f65-cert\") pod \"ingress-canary-h8cnh\" (UID: \"acf9352c-2b0b-449c-a248-78a349654f65\") " pod="openshift-ingress-canary/ingress-canary-h8cnh" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487131 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jwks\" (UniqueName: \"kubernetes.io/projected/acf9352c-2b0b-449c-a248-78a349654f65-kube-api-access-6jwks\") pod \"ingress-canary-h8cnh\" (UID: \"acf9352c-2b0b-449c-a248-78a349654f65\") " pod="openshift-ingress-canary/ingress-canary-h8cnh" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487177 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-csi-data-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " 
pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487194 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs2mk\" (UniqueName: \"kubernetes.io/projected/840c1621-59f2-44e2-b60a-63f7dd6a82dd-kube-api-access-fs2mk\") pod \"dns-default-g75m7\" (UID: \"840c1621-59f2-44e2-b60a-63f7dd6a82dd\") " pod="openshift-dns/dns-default-g75m7" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487373 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-bound-sa-token\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487504 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-plugins-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487527 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/840c1621-59f2-44e2-b60a-63f7dd6a82dd-metrics-tls\") pod \"dns-default-g75m7\" (UID: \"840c1621-59f2-44e2-b60a-63f7dd6a82dd\") " pod="openshift-dns/dns-default-g75m7" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487565 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-mountpoint-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487762 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7klv\" (UniqueName: \"kubernetes.io/projected/5d729eca-d89f-4f52-96c1-6b8da29e9678-kube-api-access-v7klv\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487805 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e2d2016-716c-4261-a1c0-5dbd804a65d8-registry-certificates\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487870 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d66pg\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-kube-api-access-d66pg\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487934 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-socket-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.487958 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/840c1621-59f2-44e2-b60a-63f7dd6a82dd-config-volume\") pod \"dns-default-g75m7\" (UID: \"840c1621-59f2-44e2-b60a-63f7dd6a82dd\") " pod="openshift-dns/dns-default-g75m7" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.488036 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-registration-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: E1008 13:20:40.488313 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:40.988299475 +0000 UTC m=+142.765681232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.490386 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e2d2016-716c-4261-a1c0-5dbd804a65d8-trusted-ca\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.498927 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e2d2016-716c-4261-a1c0-5dbd804a65d8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.501135 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e2d2016-716c-4261-a1c0-5dbd804a65d8-registry-certificates\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.501360 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.501860 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0e2d2016-716c-4261-a1c0-5dbd804a65d8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.502689 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8t8br"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.505137 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-registry-tls\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.505940 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xr8vs"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.509211 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nx54q"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.521989 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.554719 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w27qr" event={"ID":"6877d346-fa92-428a-859c-218fdfe5ca4f","Type":"ContainerStarted","Data":"c0123c109f7806ddf380e5e31cf7fd666e0d5bb11eea5fedfeadc7ec1db7b85c"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.567632 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d66pg\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-kube-api-access-d66pg\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.573822 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-bound-sa-token\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.589471 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-socket-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.589521 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/840c1621-59f2-44e2-b60a-63f7dd6a82dd-config-volume\") pod \"dns-default-g75m7\" (UID: \"840c1621-59f2-44e2-b60a-63f7dd6a82dd\") " 
pod="openshift-dns/dns-default-g75m7" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.589571 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-registration-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.589647 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.589680 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acf9352c-2b0b-449c-a248-78a349654f65-cert\") pod \"ingress-canary-h8cnh\" (UID: \"acf9352c-2b0b-449c-a248-78a349654f65\") " pod="openshift-ingress-canary/ingress-canary-h8cnh" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.589706 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jwks\" (UniqueName: \"kubernetes.io/projected/acf9352c-2b0b-449c-a248-78a349654f65-kube-api-access-6jwks\") pod \"ingress-canary-h8cnh\" (UID: \"acf9352c-2b0b-449c-a248-78a349654f65\") " pod="openshift-ingress-canary/ingress-canary-h8cnh" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.589728 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-csi-data-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.589751 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs2mk\" (UniqueName: \"kubernetes.io/projected/840c1621-59f2-44e2-b60a-63f7dd6a82dd-kube-api-access-fs2mk\") pod \"dns-default-g75m7\" (UID: \"840c1621-59f2-44e2-b60a-63f7dd6a82dd\") " pod="openshift-dns/dns-default-g75m7" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.589802 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/840c1621-59f2-44e2-b60a-63f7dd6a82dd-metrics-tls\") pod \"dns-default-g75m7\" (UID: \"840c1621-59f2-44e2-b60a-63f7dd6a82dd\") " pod="openshift-dns/dns-default-g75m7" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.589826 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-plugins-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.589863 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-mountpoint-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " 
pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.589907 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7klv\" (UniqueName: \"kubernetes.io/projected/5d729eca-d89f-4f52-96c1-6b8da29e9678-kube-api-access-v7klv\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.590657 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-csi-data-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.590932 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-socket-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.591043 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-registration-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: E1008 13:20:40.591173 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:41.091161156 +0000 UTC m=+142.868542913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.591683 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/840c1621-59f2-44e2-b60a-63f7dd6a82dd-config-volume\") pod \"dns-default-g75m7\" (UID: \"840c1621-59f2-44e2-b60a-63f7dd6a82dd\") " pod="openshift-dns/dns-default-g75m7" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.591743 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-plugins-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.591779 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5d729eca-d89f-4f52-96c1-6b8da29e9678-mountpoint-dir\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.606996 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acf9352c-2b0b-449c-a248-78a349654f65-cert\") pod \"ingress-canary-h8cnh\" (UID: \"acf9352c-2b0b-449c-a248-78a349654f65\") " pod="openshift-ingress-canary/ingress-canary-h8cnh" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.608863 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/840c1621-59f2-44e2-b60a-63f7dd6a82dd-metrics-tls\") pod \"dns-default-g75m7\" (UID: \"840c1621-59f2-44e2-b60a-63f7dd6a82dd\") " pod="openshift-dns/dns-default-g75m7" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.627364 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kw2gp" event={"ID":"cf77c43f-8ce4-40aa-81bf-d2d40068edcc","Type":"ContainerStarted","Data":"f61408d23d88e8511b7a789ad8c1f048490afe899edfa956e5e80a87f34651c1"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.634017 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xpzqc" event={"ID":"d7443ea7-16f6-449c-baea-52a1facd0967","Type":"ContainerStarted","Data":"43655fad64dccd740738a712f37c1dbf5125f03386e795f2ad4170ac5704fb9f"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.636189 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7klv\" (UniqueName: \"kubernetes.io/projected/5d729eca-d89f-4f52-96c1-6b8da29e9678-kube-api-access-v7klv\") pod \"csi-hostpathplugin-6sglj\" (UID: \"5d729eca-d89f-4f52-96c1-6b8da29e9678\") " pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.644201 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6sglj" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.645600 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zlcv8" event={"ID":"efd7ad79-f03d-486a-88d8-8be245697463","Type":"ContainerStarted","Data":"7e98b642edc8e0e0c2d95a7d41745005c02be5178d4d8dae6d9ccb758f56c4d8"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.654332 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jwks\" (UniqueName: \"kubernetes.io/projected/acf9352c-2b0b-449c-a248-78a349654f65-kube-api-access-6jwks\") pod \"ingress-canary-h8cnh\" (UID: \"acf9352c-2b0b-449c-a248-78a349654f65\") " pod="openshift-ingress-canary/ingress-canary-h8cnh" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.658979 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" event={"ID":"09664f6d-52dd-48af-b1ad-d19e58094ecc","Type":"ContainerStarted","Data":"e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.659023 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" event={"ID":"09664f6d-52dd-48af-b1ad-d19e58094ecc","Type":"ContainerStarted","Data":"2ecb73939ca27f2cd04627979e05b7f458ec77ffc015f0c25bc890b124baa7e6"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.660124 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.661102 5065 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fplnt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.661138 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" podUID="09664f6d-52dd-48af-b1ad-d19e58094ecc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.666240 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2rxtm" event={"ID":"7e38b84d-2101-41b3-b75a-45d06288470e","Type":"ContainerStarted","Data":"65adb7ff18835254ba1697464059f36bbc5a62520de917b82cd45d1e83d59520"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.679463 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" event={"ID":"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3","Type":"ContainerStarted","Data":"135842ba36e1287c50ed2fd84fa736844ba0cce2875f13cd41b0371ee09c2877"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.679506 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" event={"ID":"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3","Type":"ContainerStarted","Data":"fafbd122a7e3a2b640d062801f79abb7b623ece880f5c36f41fba8b2ae8f1606"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.680184 
5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.684067 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs2mk\" (UniqueName: \"kubernetes.io/projected/840c1621-59f2-44e2-b60a-63f7dd6a82dd-kube-api-access-fs2mk\") pod \"dns-default-g75m7\" (UID: \"840c1621-59f2-44e2-b60a-63f7dd6a82dd\") " pod="openshift-dns/dns-default-g75m7" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.687498 5065 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-r4tnh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.687546 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" podUID="dd72b69e-5d4d-44ae-86ec-00f5b52c49a3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.691462 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:40 crc kubenswrapper[5065]: E1008 13:20:40.691849 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:41.191830386 +0000 UTC m=+142.969212143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.698396 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.701366 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" event={"ID":"af14fd36-37c4-43d6-aabc-722f41b42da1","Type":"ContainerStarted","Data":"99f3a8fb88b81485dbe23e62517188424e5a7c3cbff820be3a3596d7c2ccf05d"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.701400 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" event={"ID":"af14fd36-37c4-43d6-aabc-722f41b42da1","Type":"ContainerStarted","Data":"4cc5a65ef7e7465056108bbed082c206d1828d340193d629da30d27cab02d3ed"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.702701 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj" event={"ID":"c300f213-f82e-4f8c-8402-9e0af05d049c","Type":"ContainerStarted","Data":"cdabe4511279197ac8a6566368b2a0f3a7415c9a2bdfbecad072cb2e384e9ca0"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.707661 5065 generic.go:334] "Generic (PLEG): container finished" podID="2f5abc48-dd97-49da-8c32-b388116c092a" containerID="cd3ef431ed6f679758b02c37e96e509b0bcf2690b5c0e44618e15078ec5ea5ad" exitCode=0 Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.707714 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9" event={"ID":"2f5abc48-dd97-49da-8c32-b388116c092a","Type":"ContainerDied","Data":"cd3ef431ed6f679758b02c37e96e509b0bcf2690b5c0e44618e15078ec5ea5ad"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.707746 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9" event={"ID":"2f5abc48-dd97-49da-8c32-b388116c092a","Type":"ContainerStarted","Data":"4863ecb473722748937b1e4be6d87aa60e3d6afee6a25a4f025c7e01c24c4dab"} Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.835356 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.838173 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: E1008 13:20:40.839656 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-08 13:20:41.339635252 +0000 UTC m=+143.117017089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.861855 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.913287 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h8cnh" Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.928606 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8gdt7"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.928760 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.928922 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.929006 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t"] Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.946281 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:40 crc kubenswrapper[5065]: E1008 13:20:40.946486 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:41.446463844 +0000 UTC m=+143.223845601 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.946648 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:40 crc kubenswrapper[5065]: E1008 13:20:40.946946 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:41.446933757 +0000 UTC m=+143.224315514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:40 crc kubenswrapper[5065]: I1008 13:20:40.947953 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-g75m7" Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.047828 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:41 crc kubenswrapper[5065]: E1008 13:20:41.048058 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:41.548035519 +0000 UTC m=+143.325417276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.048179 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:41 crc kubenswrapper[5065]: E1008 13:20:41.048508 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:41.548500562 +0000 UTC m=+143.325882319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.148849 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:41 crc kubenswrapper[5065]: E1008 13:20:41.149051 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:41.649024439 +0000 UTC m=+143.426406206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.150044 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:41 crc kubenswrapper[5065]: E1008 13:20:41.150523 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:41.650485709 +0000 UTC m=+143.427867466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.253232 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:41 crc kubenswrapper[5065]: E1008 13:20:41.253811 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:41.753786283 +0000 UTC m=+143.531168050 (durationBeforeRetry 500ms). 
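
The durationBeforeRetry 500ms refrain comes from the kubelet's pending-operations bookkeeping: a failed volume operation is not retried immediately but scheduled with a no-retries-permitted-until deadline, while the desired-state reconciler keeps re-announcing the volume on every pass, which is why the operationExecutor.MountVolume started / UnmountVolume started pairs repeat at roughly 100ms intervals yet actual attempts land about 500ms apart. The kubelet grows this backoff on persistent failure up to a cap of roughly two minutes; this window only ever shows the 500ms base. An illustrative gate in that spirit; opBackoff is not kubelet's type:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// opBackoff gates retries the way the "No retries permitted until ..."
// entries describe: after a failure, the operation may not run again
// until its retry deadline has passed.
type opBackoff struct {
	notBefore time.Time
	delay     time.Duration
}

func (b *opBackoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // the base seen throughout this log
	} else if b.delay < 2*time.Minute {
		b.delay *= 2 // grows on repeated failure, capped
	}
	b.notBefore = now.Add(b.delay)
}

func (b *opBackoff) allowed(now time.Time) bool { return !now.Before(b.notBefore) }

func main() {
	mount := func() error {
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	var b opBackoff
	now := time.Now()
	for i := 0; i < 3; i++ {
		if !b.allowed(now) {
			// The reconciler keeps re-requesting the volume each pass,
			// but the operation itself waits for the deadline.
			now = b.notBefore
		}
		if err := mount(); err != nil {
			b.fail(now)
			fmt.Printf("failed; no retries permitted until %s (durationBeforeRetry %s)\n",
				b.notBefore.Format(time.RFC3339Nano), b.delay)
		}
	}
}
```
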
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.354586 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:41 crc kubenswrapper[5065]: E1008 13:20:41.355000 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:41.854980398 +0000 UTC m=+143.632362195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.462090 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:41 crc kubenswrapper[5065]: E1008 13:20:41.462683 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:41.962661453 +0000 UTC m=+143.740043220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.463135 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:41 crc kubenswrapper[5065]: E1008 13:20:41.463678 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:41.963663911 +0000 UTC m=+143.741045668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.494944 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2b6kx"] Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.512172 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf"] Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.515401 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr"] Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.521647 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xx98"] Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.533501 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-lcdtm"] Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.558578 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6gx5j"] Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.565065 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:41 crc kubenswrapper[5065]: E1008 13:20:41.565503 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-08 13:20:42.065482584 +0000 UTC m=+143.842864341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.674659 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:41 crc kubenswrapper[5065]: E1008 13:20:41.675864 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:42.175846894 +0000 UTC m=+143.953228651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.771484 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zlcv8" event={"ID":"efd7ad79-f03d-486a-88d8-8be245697463","Type":"ContainerStarted","Data":"23f3a21be88128523e12134987d865cabf010efaf4b99052760f454a749d3473"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.779743 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.784104 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f"] Oct 08 13:20:41 crc kubenswrapper[5065]: E1008 13:20:41.779808 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:42.279792705 +0000 UTC m=+144.057174452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.785139 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:41 crc kubenswrapper[5065]: E1008 13:20:41.785552 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:42.285537195 +0000 UTC m=+144.062918952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.803886 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2rxtm" event={"ID":"7e38b84d-2101-41b3-b75a-45d06288470e","Type":"ContainerStarted","Data":"c499bcc5c79e54f85fd3aa2c09ee5512d4ed6b24c341687ccc49e35a683d2fd1"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.886091 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:41 crc kubenswrapper[5065]: E1008 13:20:41.887252 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:42.387236364 +0000 UTC m=+144.164618121 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.923083 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w27qr" event={"ID":"6877d346-fa92-428a-859c-218fdfe5ca4f","Type":"ContainerStarted","Data":"9d0b92ab72c1f9c15403da27caec45fb61bd15a49435baf987c3eb3e882b3698"} Oct 08 13:20:41 crc kubenswrapper[5065]: W1008 13:20:41.937751 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod939719c1_bfcc_469b_a627_627761c67f47.slice/crio-9e565c9407a0c64fbdebe20e46d99070fde2999f1c33880eb91b4699b06a34e0 WatchSource:0}: Error finding container 9e565c9407a0c64fbdebe20e46d99070fde2999f1c33880eb91b4699b06a34e0: Status 404 returned error can't find the container with id 9e565c9407a0c64fbdebe20e46d99070fde2999f1c33880eb91b4699b06a34e0 Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.938825 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" event={"ID":"2ea1c820-2ae9-4b81-874b-3620ffa07f72","Type":"ContainerStarted","Data":"6e4941d9ca0671c17633ed436c79b17e4b7bcd886c6605527d8dc807aa618789"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.946079 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" event={"ID":"36606460-fa2c-4254-acd5-9de143291cca","Type":"ContainerStarted","Data":"31fb1617d8a135ce71d175d9322ce4a6221b4268b14ae679e122af04283e7008"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.946107 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" event={"ID":"36606460-fa2c-4254-acd5-9de143291cca","Type":"ContainerStarted","Data":"ec5ab0604c77910035ad812d0936c4658be93c8593e18dc0f561bda08e64d43d"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.946911 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.956083 5065 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mzc8r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.956126 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" podUID="36606460-fa2c-4254-acd5-9de143291cca" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.956428 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc" 
event={"ID":"025e6f00-f56b-4674-9cf8-6ddb57afe15f","Type":"ContainerStarted","Data":"c0a082e49bc275d3e835cf4daa2e147f4a51d29d23b86664356eb2bbbabb8e75"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.956452 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc" event={"ID":"025e6f00-f56b-4674-9cf8-6ddb57afe15f","Type":"ContainerStarted","Data":"1e9136d20eabf2d5436d6756cc22505f18dd3ff8cdc6e9119ffeddffb3ff4599"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.964717 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" event={"ID":"42132cd2-ec8f-47e2-8011-6f39c454977f","Type":"ContainerStarted","Data":"541f8881cd3573f69b5d26ebe5dd5f36ce63a4f31852489297177bd68d0cbd7d"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.964744 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" event={"ID":"42132cd2-ec8f-47e2-8011-6f39c454977f","Type":"ContainerStarted","Data":"b66f831c758e744b70304d09709fad3e35013827ca4ecd11f6ca1be773dd519a"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.974255 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8zlcn" event={"ID":"28c60830-7cae-45ed-bbe5-edbb83a24e87","Type":"ContainerStarted","Data":"7dcabae84c093cb6fb901f687b9cbfcae4aa9d629031986798a432e1cc5aced1"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.974299 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8zlcn" event={"ID":"28c60830-7cae-45ed-bbe5-edbb83a24e87","Type":"ContainerStarted","Data":"1a0a31e709c574de80e76e7aa0fc93f666e0fe022c84c1132278b11b8d133070"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.974787 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-s52g7"] Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.977567 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl"] Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.979567 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rnnlg"] Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.980768 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj" event={"ID":"c300f213-f82e-4f8c-8402-9e0af05d049c","Type":"ContainerStarted","Data":"13d923b64ec8377ee7881cde3544067b5acb4275f310c7f5d2dede97f32c8b43"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.986923 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" event={"ID":"07e8eaa7-dc9d-4581-a962-554de51f6137","Type":"ContainerStarted","Data":"fcfe1e8d7acd0331cdc6d2f4323dac23c5e0ab32f145f7ca870577bdf474f8c2"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.987274 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:41 crc kubenswrapper[5065]: 
E1008 13:20:41.989039 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:42.488599164 +0000 UTC m=+144.265980921 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.994573 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t" event={"ID":"562d8067-863a-4644-9fd6-f51281a2191b","Type":"ContainerStarted","Data":"662182278d52093ca2af1bcee97abf4072f74e812c4bf7a43702aaf32391d03b"} Oct 08 13:20:41 crc kubenswrapper[5065]: I1008 13:20:41.994607 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t" event={"ID":"562d8067-863a-4644-9fd6-f51281a2191b","Type":"ContainerStarted","Data":"ca00e319c17260847ec94279a8c045c991ebf2732f7b5c4c8ffb5b3c1f2c23a1"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.012236 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" event={"ID":"bba0864b-5c2f-42d3-bb43-caaaa1dc4267","Type":"ContainerStarted","Data":"d768bb58400cb4ffdb3f62b20b556806aae3e0ce69422ae7ec01168fcb98491a"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.024219 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kw2gp" event={"ID":"cf77c43f-8ce4-40aa-81bf-d2d40068edcc","Type":"ContainerStarted","Data":"adb0b6981ab606cf94844fb5d3aed6d2b350521da07439de4fc8576025637579"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.026156 5065 generic.go:334] "Generic (PLEG): container finished" podID="61abc989-efa8-41c2-ae46-1c7015e76aee" containerID="9aad2885f45ec8b3612d8b328b939a7275e73a7fa553359fb2656e3e87e165e6" exitCode=0 Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.026191 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8t8br" event={"ID":"61abc989-efa8-41c2-ae46-1c7015e76aee","Type":"ContainerDied","Data":"9aad2885f45ec8b3612d8b328b939a7275e73a7fa553359fb2656e3e87e165e6"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.026205 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8t8br" event={"ID":"61abc989-efa8-41c2-ae46-1c7015e76aee","Type":"ContainerStarted","Data":"23e2d244af729a851705123efa731a5f2ced47dcecb0e18e2e0fec8c0804a032"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.035362 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" event={"ID":"6ac35ef1-5519-41d1-b2a3-61b03415fbaa","Type":"ContainerStarted","Data":"629b2a748011db19784ebe68beeb78a401cd7a43bb005f0227bac2f22325f880"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.046574 5065 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs" event={"ID":"f86b9043-eb02-42e7-b53b-2e684dd2ad26","Type":"ContainerStarted","Data":"cd69cf1f0fe661b217c33f772792580bde50f91bc3ff513666edb809e25487ea"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.055836 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" event={"ID":"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57","Type":"ContainerStarted","Data":"de50af3204eaffa548b6b17fcd7317006763bc3c29c5317eaed9a4bc03dd6505"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.063844 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" podStartSLOduration=125.063804913 podStartE2EDuration="2m5.063804913s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:42.061228831 +0000 UTC m=+143.838610598" watchObservedRunningTime="2025-10-08 13:20:42.063804913 +0000 UTC m=+143.841186670" Oct 08 13:20:42 crc kubenswrapper[5065]: W1008 13:20:42.072882 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb92e0c1d_2733_4e94_9bf2_667b6074ebe0.slice/crio-c0db7f1418d52bf54fb8ecf533ef332a2f084b365d41efd19c803a3f6958c1c4 WatchSource:0}: Error finding container c0db7f1418d52bf54fb8ecf533ef332a2f084b365d41efd19c803a3f6958c1c4: Status 404 returned error can't find the container with id c0db7f1418d52bf54fb8ecf533ef332a2f084b365d41efd19c803a3f6958c1c4 Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.073071 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xpzqc" event={"ID":"d7443ea7-16f6-449c-baea-52a1facd0967","Type":"ContainerStarted","Data":"812d5bf5e3f72dd70ad4e0e84b8746b1154d76af68e1fbb0972f771bc190b381"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.073887 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xpzqc" Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.098336 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:42 crc kubenswrapper[5065]: E1008 13:20:42.104313 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:42.600192479 +0000 UTC m=+144.377574236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.104914 5065 patch_prober.go:28] interesting pod/console-operator-58897d9998-xpzqc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/readyz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.105013 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xpzqc" podUID="d7443ea7-16f6-449c-baea-52a1facd0967" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/readyz\": dial tcp 10.217.0.32:8443: connect: connection refused" Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.112719 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr" event={"ID":"86607750-37d4-45d3-bc51-8633912e77fd","Type":"ContainerStarted","Data":"a748eb73dec652eae5a854d82aa8951c33be08076dbf0a93482099332d4ede17"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.128175 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lcdtm" event={"ID":"b923335b-a2b2-4919-909d-70a6d141c798","Type":"ContainerStarted","Data":"728e4cc16336db62fb77f98139975440ce7c1c9361d34bc15b2d91aca31ad095"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.130622 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" event={"ID":"feb22448-6135-462d-91a3-66851678143d","Type":"ContainerStarted","Data":"50b7dc40260dfc3341b497dd06cf6e8344a72b2cb8393f58abe0041f2debeae8"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.131994 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" event={"ID":"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd","Type":"ContainerStarted","Data":"98339c4ae6e60802f7674f64fe9b9490588f017e88718fb1350362702d1b5464"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.147146 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft" event={"ID":"b4278c38-600b-497f-927d-04791c551470","Type":"ContainerStarted","Data":"6d40339819347a462287aba7cbecae5360c6b49dc0fea32c743fbdaf85ff0e27"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.151967 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562"] Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.161929 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" podStartSLOduration=124.161911962 podStartE2EDuration="2m4.161911962s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-10-08 13:20:42.160881633 +0000 UTC m=+143.938263390" watchObservedRunningTime="2025-10-08 13:20:42.161911962 +0000 UTC m=+143.939293719" Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.167842 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" event={"ID":"af14fd36-37c4-43d6-aabc-722f41b42da1","Type":"ContainerStarted","Data":"0daf2e38389e448955450d42f71b55116c811c1672488030a41e229dd9a7bda4"} Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.177149 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.179993 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.184852 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24"] Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.185744 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2rxtm" podStartSLOduration=5.185734926 podStartE2EDuration="5.185734926s" podCreationTimestamp="2025-10-08 13:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:42.185451729 +0000 UTC m=+143.962833486" watchObservedRunningTime="2025-10-08 13:20:42.185734926 +0000 UTC m=+143.963116673" Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.198664 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88"] Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.204275 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm"] Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.204818 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:42 crc kubenswrapper[5065]: E1008 13:20:42.206219 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:42.706204548 +0000 UTC m=+144.483586295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.242450 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c8bkc" podStartSLOduration=124.242409419 podStartE2EDuration="2m4.242409419s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:42.23100469 +0000 UTC m=+144.008386447" watchObservedRunningTime="2025-10-08 13:20:42.242409419 +0000 UTC m=+144.019791186" Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.265549 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6sglj"] Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.271768 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd"] Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.277481 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk"] Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.279212 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" podStartSLOduration=124.279201076 podStartE2EDuration="2m4.279201076s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:42.259359172 +0000 UTC m=+144.036740929" watchObservedRunningTime="2025-10-08 13:20:42.279201076 +0000 UTC m=+144.056582833" Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.280103 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.280241 5065 patch_prober.go:28] interesting pod/router-default-5444994796-8zlcn container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.280284 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8zlcn" podUID="28c60830-7cae-45ed-bbe5-edbb83a24e87" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.293188 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-g75m7"] Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.310036 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:42 crc kubenswrapper[5065]: E1008 13:20:42.310249 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:42.810215511 +0000 UTC m=+144.587597268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.310436 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:42 crc kubenswrapper[5065]: E1008 13:20:42.310969 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:42.810959022 +0000 UTC m=+144.588340779 (durationBeforeRetry 500ms). 
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.357429 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h8cnh"]
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.370474 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5wd6t" podStartSLOduration=124.370454923 podStartE2EDuration="2m4.370454923s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:42.352871762 +0000 UTC m=+144.130253519" watchObservedRunningTime="2025-10-08 13:20:42.370454923 +0000 UTC m=+144.147836700"
Oct 08 13:20:42 crc kubenswrapper[5065]: W1008 13:20:42.383711 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod840c1621_59f2_44e2_b60a_63f7dd6a82dd.slice/crio-0a3635c76adba70f5d30618d51b6312dfd48941c78db33a5024af19c63732fe2 WatchSource:0}: Error finding container 0a3635c76adba70f5d30618d51b6312dfd48941c78db33a5024af19c63732fe2: Status 404 returned error can't find the container with id 0a3635c76adba70f5d30618d51b6312dfd48941c78db33a5024af19c63732fe2
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.412285 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 13:20:42 crc kubenswrapper[5065]: E1008 13:20:42.412816 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:42.912795865 +0000 UTC m=+144.690177622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:42 crc kubenswrapper[5065]: W1008 13:20:42.419495 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod696bcbce_29a9_4686_9ac0_e5af4558fc82.slice/crio-1d3fc6af895973bdd6847c37c56bde8cead690f2c1c54937b2cf315323131b9f WatchSource:0}: Error finding container 1d3fc6af895973bdd6847c37c56bde8cead690f2c1c54937b2cf315323131b9f: Status 404 returned error can't find the container with id 1d3fc6af895973bdd6847c37c56bde8cead690f2c1c54937b2cf315323131b9f
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.426687 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-w27qr" podStartSLOduration=125.426673572 podStartE2EDuration="2m5.426673572s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:42.426367814 +0000 UTC m=+144.203749581" watchObservedRunningTime="2025-10-08 13:20:42.426673572 +0000 UTC m=+144.204055329"
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.428321 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-8zlcn" podStartSLOduration=125.428317018 podStartE2EDuration="2m5.428317018s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:42.395547313 +0000 UTC m=+144.172929090" watchObservedRunningTime="2025-10-08 13:20:42.428317018 +0000 UTC m=+144.205698775"
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.513542 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5"
Oct 08 13:20:42 crc kubenswrapper[5065]: E1008 13:20:42.514279 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:43.014264198 +0000 UTC m=+144.791645955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.561281 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft" podStartSLOduration=124.561257259 podStartE2EDuration="2m4.561257259s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:42.560238161 +0000 UTC m=+144.337619918" watchObservedRunningTime="2025-10-08 13:20:42.561257259 +0000 UTC m=+144.338639016"
Oct 08 13:20:42 crc kubenswrapper[5065]: W1008 13:20:42.568659 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d729eca_d89f_4f52_96c1_6b8da29e9678.slice/crio-925e6423fa5a8fdf45444748510d329959d2de2f6cbf2ee5129b9216fa215e4f WatchSource:0}: Error finding container 925e6423fa5a8fdf45444748510d329959d2de2f6cbf2ee5129b9216fa215e4f: Status 404 returned error can't find the container with id 925e6423fa5a8fdf45444748510d329959d2de2f6cbf2ee5129b9216fa215e4f
Oct 08 13:20:42 crc kubenswrapper[5065]: W1008 13:20:42.589540 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacf9352c_2b0b_449c_a248_78a349654f65.slice/crio-10e40074da2ee84d9950cb7b4a508df6c15422ab6344b9f15fc3a6f816985056 WatchSource:0}: Error finding container 10e40074da2ee84d9950cb7b4a508df6c15422ab6344b9f15fc3a6f816985056: Status 404 returned error can't find the container with id 10e40074da2ee84d9950cb7b4a508df6c15422ab6344b9f15fc3a6f816985056
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.612471 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g2w6g" podStartSLOduration=125.612453099 podStartE2EDuration="2m5.612453099s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:42.578239563 +0000 UTC m=+144.355621310" watchObservedRunningTime="2025-10-08 13:20:42.612453099 +0000 UTC m=+144.389834856"
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.614598 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 13:20:42 crc kubenswrapper[5065]: E1008 13:20:42.614928 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:43.114916227 +0000 UTC m=+144.892297984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.614957 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5"
Oct 08 13:20:42 crc kubenswrapper[5065]: E1008 13:20:42.615206 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:43.115199515 +0000 UTC m=+144.892581272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.647077 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xpzqc" podStartSLOduration=125.647062275 podStartE2EDuration="2m5.647062275s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:42.635915543 +0000 UTC m=+144.413297300" watchObservedRunningTime="2025-10-08 13:20:42.647062275 +0000 UTC m=+144.424444032"
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.715391 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 13:20:42 crc kubenswrapper[5065]: E1008 13:20:42.715819 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:43.215785503 +0000 UTC m=+144.993167270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.716027 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5"
Oct 08 13:20:42 crc kubenswrapper[5065]: E1008 13:20:42.716377 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:43.216365709 +0000 UTC m=+144.993747466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.816901 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 13:20:42 crc kubenswrapper[5065]: E1008 13:20:42.817201 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:43.317187493 +0000 UTC m=+145.094569250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:42 crc kubenswrapper[5065]: I1008 13:20:42.920405 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5"
Oct 08 13:20:42 crc kubenswrapper[5065]: E1008 13:20:42.921019 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:43.421007602 +0000 UTC m=+145.198389359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.021621 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 13:20:43 crc kubenswrapper[5065]: E1008 13:20:43.022052 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:43.522033992 +0000 UTC m=+145.299415749 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
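The pod_startup_latency_tracker records interleaved above are pure bookkeeping: podStartSLOduration is the observed-running timestamp minus podCreationTimestamp (the pull timestamps here are Go zero values, so the SLO and E2E durations coincide). A small sketch of the arithmetic, using the route-controller-manager record's values; this assumes, plausibly but not verifiably from the log alone, that the timestamps are Go's default time.Time formatting:

package main

import (
	"fmt"
	"time"
)

func main() {
	// layout matching the "2006-01-02 15:04:05.999999999 -0700 MST" form seen in the log
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-10-08 13:18:38 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-10-08 13:20:42.161911962 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// prints 2m4.161911962s, i.e. podStartSLOduration=124.161911962
	fmt.Println(running.Sub(created))
}

The log continues: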
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.123958 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5"
Oct 08 13:20:43 crc kubenswrapper[5065]: E1008 13:20:43.124764 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:43.624748019 +0000 UTC m=+145.402129776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.225404 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 13:20:43 crc kubenswrapper[5065]: E1008 13:20:43.225671 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:43.725644965 +0000 UTC m=+145.503026722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.225734 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5"
Oct 08 13:20:43 crc kubenswrapper[5065]: E1008 13:20:43.226250 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:43.726243212 +0000 UTC m=+145.503624969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.231682 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cbgft" event={"ID":"b4278c38-600b-497f-927d-04791c551470","Type":"ContainerStarted","Data":"c529e994ac4bce7c0024577618e9fa6a2a9ee1104599e146fa0f141e233b10e4"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.258934 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7" event={"ID":"b92e0c1d-2733-4e94-9bf2-667b6074ebe0","Type":"ContainerStarted","Data":"c0db7f1418d52bf54fb8ecf533ef332a2f084b365d41efd19c803a3f6958c1c4"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.266786 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2b6kx" event={"ID":"b17ef7c4-a962-4759-9441-33d28b384b4e","Type":"ContainerStarted","Data":"4248549e008a7620658b438398587b778c18b4c3c9e6de768768ec824f083cd0"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.266842 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2b6kx" event={"ID":"b17ef7c4-a962-4759-9441-33d28b384b4e","Type":"ContainerStarted","Data":"767d6f19001716644074a2a8cdab47096d4e1be501575a517889bfb7645c68c8"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.280967 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg" event={"ID":"d05b459f-b9a4-425d-936a-60ee9dc5b5f0","Type":"ContainerStarted","Data":"d391285b084518c01f78067ba9fc51219c662a47271e6b57a1599319005d03ee"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.281020 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg" event={"ID":"d05b459f-b9a4-425d-936a-60ee9dc5b5f0","Type":"ContainerStarted","Data":"9d4fcd294a255771daa034f5067743168065e1338431b96e44d7f9e1446354b5"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.296702 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7" podStartSLOduration=125.296684108 podStartE2EDuration="2m5.296684108s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.288121749 +0000 UTC m=+145.065503506" watchObservedRunningTime="2025-10-08 13:20:43.296684108 +0000 UTC m=+145.074065865"
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.297502 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" event={"ID":"c2648eeb-e556-43bd-a3de-ace83e540571","Type":"ContainerStarted","Data":"5d6747c03e298b3379e114f4cabf517d406989be357d9baa725a763e4a9ef958"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.307773 5065 patch_prober.go:28] interesting pod/router-default-5444994796-8zlcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 08 13:20:43 crc kubenswrapper[5065]: [-]has-synced failed: reason withheld
Oct 08 13:20:43 crc kubenswrapper[5065]: [+]process-running ok
Oct 08 13:20:43 crc kubenswrapper[5065]: healthz check failed
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.307847 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8zlcn" podUID="28c60830-7cae-45ed-bbe5-edbb83a24e87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.328457 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.328750 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-rnnlg" podStartSLOduration=125.328480326 podStartE2EDuration="2m5.328480326s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.327503279 +0000 UTC m=+145.104885056" watchObservedRunningTime="2025-10-08 13:20:43.328480326 +0000 UTC m=+145.105862083"
Oct 08 13:20:43 crc kubenswrapper[5065]: E1008 13:20:43.329487 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:43.829472464 +0000 UTC m=+145.606854221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.331382 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj" event={"ID":"c300f213-f82e-4f8c-8402-9e0af05d049c","Type":"ContainerStarted","Data":"14996866cbebe14669776ea317edfcfe64404bfe0fa5fbb753eb67870787bbeb"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.347204 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs" event={"ID":"f86b9043-eb02-42e7-b53b-2e684dd2ad26","Type":"ContainerStarted","Data":"e4e0db8f84deef38124fee73af13abd1d3ef51e2daad7889e3494f2630ebf9bf"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.347249 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs" event={"ID":"f86b9043-eb02-42e7-b53b-2e684dd2ad26","Type":"ContainerStarted","Data":"01c84038598c4492b134d08dc610912f4c03f43bceb950b0642c844e830e9eab"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.357531 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fx6cj" podStartSLOduration=125.357513066 podStartE2EDuration="2m5.357513066s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.357057044 +0000 UTC m=+145.134438801" watchObservedRunningTime="2025-10-08 13:20:43.357513066 +0000 UTC m=+145.134894843"
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.382561 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk" event={"ID":"696bcbce-29a9-4686-9ac0-e5af4558fc82","Type":"ContainerStarted","Data":"1d3fc6af895973bdd6847c37c56bde8cead690f2c1c54937b2cf315323131b9f"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.401069 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24" event={"ID":"6110169d-e524-4d26-a6a5-514ee5384554","Type":"ContainerStarted","Data":"8f5bdf6f5cfcbc62a2cacd2c9b5c6b145b079258d98253bf8d9b3ae0e1d71096"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.428330 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-87chs" podStartSLOduration=125.428313783 podStartE2EDuration="2m5.428313783s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.427001496 +0000 UTC m=+145.204383253" watchObservedRunningTime="2025-10-08 13:20:43.428313783 +0000 UTC m=+145.205695540"
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.429871 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5"
Oct 08 13:20:43 crc kubenswrapper[5065]: E1008 13:20:43.430120 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:43.930107793 +0000 UTC m=+145.707489550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.443210 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6sglj" event={"ID":"5d729eca-d89f-4f52-96c1-6b8da29e9678","Type":"ContainerStarted","Data":"925e6423fa5a8fdf45444748510d329959d2de2f6cbf2ee5129b9216fa215e4f"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.447941 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" event={"ID":"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57","Type":"ContainerStarted","Data":"faeae51c893a92143f092f27a0e80a2a18995f279383180fcf24ad8e93dfddc6"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.448571 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.473049 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" event={"ID":"939719c1-bfcc-469b-a627-627761c67f47","Type":"ContainerStarted","Data":"6655ae01b4597f6252814b02f818d706da1a559b5c70bfc1b315a0a88a1ce0b5"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.473100 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" event={"ID":"939719c1-bfcc-469b-a627-627761c67f47","Type":"ContainerStarted","Data":"9e565c9407a0c64fbdebe20e46d99070fde2999f1c33880eb91b4699b06a34e0"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.473672 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f"
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.475275 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr" event={"ID":"86607750-37d4-45d3-bc51-8633912e77fd","Type":"ContainerStarted","Data":"30c59c0555d5da99117318d870131b37f405846266bf45d1e9539426f772fdac"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.488105 5065 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8gdt7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body=
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.488168 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" podUID="e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused"
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.488408 5065 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-wkr4f container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body=
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.488448 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" podUID="939719c1-bfcc-469b-a627-627761c67f47" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused"
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.493363 5065 generic.go:334] "Generic (PLEG): container finished" podID="07e8eaa7-dc9d-4581-a962-554de51f6137" containerID="6791f2a74eef17cd7b69e404ab2bec774d396b09fb09ac554e18ece0ae3b4417" exitCode=0
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.494369 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" event={"ID":"07e8eaa7-dc9d-4581-a962-554de51f6137","Type":"ContainerDied","Data":"6791f2a74eef17cd7b69e404ab2bec774d396b09fb09ac554e18ece0ae3b4417"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.508130 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" podStartSLOduration=126.5081118 podStartE2EDuration="2m6.5081118s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.507051911 +0000 UTC m=+145.284433668" watchObservedRunningTime="2025-10-08 13:20:43.5081118 +0000 UTC m=+145.285493557"
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.512598 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" event={"ID":"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd","Type":"ContainerStarted","Data":"5ff7c000da06244e0ab50bf55f28dcc829c18459221af2627a88560d02cfab42"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.520650 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-g75m7" event={"ID":"840c1621-59f2-44e2-b60a-63f7dd6a82dd","Type":"ContainerStarted","Data":"0a3635c76adba70f5d30618d51b6312dfd48941c78db33a5024af19c63732fe2"}
Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.534533 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 13:20:43 crc kubenswrapper[5065]: E1008 13:20:43.535710 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:44.03569554 +0000 UTC m=+145.813077297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
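The burst of readiness-probe failures above (console-operator, oauth-openshift, olm-operator, marketplace-operator, all reporting "connect: connection refused") is the usual pattern for containers probed in the window between ContainerStarted and the process binding its port: the kubelet records each failure and simply probes again on the next period. A stripped-down illustration of what such an HTTP probe reduces to follows; it is a sketch only, since the real prober also handles TLS, headers, and failure thresholds.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one HTTP GET the way a readiness probe does: any
// transport error (e.g. "connect: connection refused") or a status code
// of 400 or above counts as a probe failure.
func probeOnce(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// nothing listens on this port here, so this fails just like the probes in the log
	if err := probeOnce("http://127.0.0.1:1936/healthz/ready"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}

The log continues: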
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:44.03569554 +0000 UTC m=+145.813077297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.544045 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lcdtm" event={"ID":"b923335b-a2b2-4919-909d-70a6d141c798","Type":"ContainerStarted","Data":"0ab5526e10ad4225984ccc48db3e76f04416afab262f170c6bdf43bb05fd657d"} Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.544806 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-lcdtm" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.548664 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j97lr" podStartSLOduration=126.548648872 podStartE2EDuration="2m6.548648872s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.548278391 +0000 UTC m=+145.325660158" watchObservedRunningTime="2025-10-08 13:20:43.548648872 +0000 UTC m=+145.326030629" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.570292 5065 patch_prober.go:28] interesting pod/downloads-7954f5f757-lcdtm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.570361 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lcdtm" podUID="b923335b-a2b2-4919-909d-70a6d141c798" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.590026 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kw2gp" event={"ID":"cf77c43f-8ce4-40aa-81bf-d2d40068edcc","Type":"ContainerStarted","Data":"5f5598e6538322c6842231de23c0fbaff93c5aff9f72e68df2fcf2b14e26730a"} Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.597529 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" event={"ID":"2ea1c820-2ae9-4b81-874b-3620ffa07f72","Type":"ContainerStarted","Data":"7cb4063f865a1ee6fa22b580d9187482841e91fe5c44a2f7867ab40916e5d70e"} Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.626702 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" 
event={"ID":"de258063-13a0-4a3d-93f4-b39fd81902cb","Type":"ContainerStarted","Data":"032014542e565e0f01471c9ede8fbe83166d3d4174ae9cd64ada73e918d01097"} Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.636675 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:43 crc kubenswrapper[5065]: E1008 13:20:43.638133 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:44.138117579 +0000 UTC m=+145.915499336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.642999 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9" event={"ID":"2f5abc48-dd97-49da-8c32-b388116c092a","Type":"ContainerStarted","Data":"1afbd52d2980837f90cba266f91a3b20fb2149dba13bc8e887379c5ec385a28c"} Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.644376 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.675846 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-6gx5j" podStartSLOduration=125.675829132 podStartE2EDuration="2m5.675829132s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.674579387 +0000 UTC m=+145.451961144" watchObservedRunningTime="2025-10-08 13:20:43.675829132 +0000 UTC m=+145.453210889" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.677565 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" podStartSLOduration=125.67755796 podStartE2EDuration="2m5.67755796s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.646190455 +0000 UTC m=+145.423572212" watchObservedRunningTime="2025-10-08 13:20:43.67755796 +0000 UTC m=+145.454939717" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.713189 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-lcdtm" podStartSLOduration=126.713166904 podStartE2EDuration="2m6.713166904s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.712199717 +0000 UTC m=+145.489581474" watchObservedRunningTime="2025-10-08 13:20:43.713166904 +0000 UTC m=+145.490548671" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.722438 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" event={"ID":"feb22448-6135-462d-91a3-66851678143d","Type":"ContainerStarted","Data":"84ea7bfb55712596c27c98dfea0bb9525b4de6f8f1df17fa71733af1a0dbc7c0"} Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.722983 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.741781 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:43 crc kubenswrapper[5065]: E1008 13:20:43.743900 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:44.243882351 +0000 UTC m=+146.021264098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.747127 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" event={"ID":"b719c48b-49ca-4947-8e2f-77523c4360ac","Type":"ContainerStarted","Data":"a3909a5740342a1b4b0a8e24e95845d1243d25c8757eba2a536068cb2d571ccd"} Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.761383 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9" podStartSLOduration=126.761347259 podStartE2EDuration="2m6.761347259s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.756872514 +0000 UTC m=+145.534254281" watchObservedRunningTime="2025-10-08 13:20:43.761347259 +0000 UTC m=+145.538729016" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.779661 5065 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2xx98 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.779727 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" podUID="feb22448-6135-462d-91a3-66851678143d" 
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.813756 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h8cnh" event={"ID":"acf9352c-2b0b-449c-a248-78a349654f65","Type":"ContainerStarted","Data":"10e40074da2ee84d9950cb7b4a508df6c15422ab6344b9f15fc3a6f816985056"} Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.831995 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kw2gp" podStartSLOduration=125.83197077 podStartE2EDuration="2m5.83197077s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.813918907 +0000 UTC m=+145.591300664" watchObservedRunningTime="2025-10-08 13:20:43.83197077 +0000 UTC m=+145.609352527" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.845799 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" podStartSLOduration=126.845783246 podStartE2EDuration="2m6.845783246s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.844405238 +0000 UTC m=+145.621787005" watchObservedRunningTime="2025-10-08 13:20:43.845783246 +0000 UTC m=+145.623165003" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.846233 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" event={"ID":"bba0864b-5c2f-42d3-bb43-caaaa1dc4267","Type":"ContainerStarted","Data":"0d0f019adea0b0930f01a63decde67714c2b99b63ef3dbc8d965e65dc33dc2ed"} Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.846365 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.847161 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" Oct 08 13:20:43 crc kubenswrapper[5065]: E1008 13:20:43.847633 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:44.347618897 +0000 UTC m=+146.125000654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.894673 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" event={"ID":"e227258a-5822-472d-a151-aa7c07951330","Type":"ContainerStarted","Data":"708f756f054265a35c55ed47411fbec4e13f9504f57216fdd06d38650062ee50"} Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.894943 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.911981 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zlcv8" event={"ID":"efd7ad79-f03d-486a-88d8-8be245697463","Type":"ContainerStarted","Data":"85395fe45ec2a323cb2b03c53136f5d050dec8172b1b653693fed1eb8dd4d5a8"} Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.931434 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562" event={"ID":"dfcc28cc-4e1d-47a3-89f7-f65d719e320a","Type":"ContainerStarted","Data":"ab10e0d76c316db8bd8320847428708a002cc257ce2ecf800a083df574e22d14"} Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.940627 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" podStartSLOduration=125.940609803 podStartE2EDuration="2m5.940609803s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.879664492 +0000 UTC m=+145.657046249" watchObservedRunningTime="2025-10-08 13:20:43.940609803 +0000 UTC m=+145.717991560" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.947100 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:43 crc kubenswrapper[5065]: E1008 13:20:43.947764 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:44.447745302 +0000 UTC m=+146.225127059 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.987587 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-h8cnh" podStartSLOduration=6.987566574 podStartE2EDuration="6.987566574s" podCreationTimestamp="2025-10-08 13:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.9526902 +0000 UTC m=+145.730071957" watchObservedRunningTime="2025-10-08 13:20:43.987566574 +0000 UTC m=+145.764948331" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.988971 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562" podStartSLOduration=125.988962143 podStartE2EDuration="2m5.988962143s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:43.986907385 +0000 UTC m=+145.764289142" watchObservedRunningTime="2025-10-08 13:20:43.988962143 +0000 UTC m=+145.766343910" Oct 08 13:20:43 crc kubenswrapper[5065]: I1008 13:20:43.990075 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" event={"ID":"6ac35ef1-5519-41d1-b2a3-61b03415fbaa","Type":"ContainerStarted","Data":"6fc755e522f60033ed68c7cc0974248f90a0d619671aa90ee30ef901d394125f"} Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.016349 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" event={"ID":"42132cd2-ec8f-47e2-8011-6f39c454977f","Type":"ContainerStarted","Data":"dd372bb9f8154bb7220cdd4678d4921d919fa0cab2b5571b2467319a1e807c66"} Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.049515 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:44 crc kubenswrapper[5065]: E1008 13:20:44.053119 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:44.553099223 +0000 UTC m=+146.330481070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.065341 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mzc8r" Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.108764 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xpzqc" Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.122847 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbtng" podStartSLOduration=126.12283018 podStartE2EDuration="2m6.12283018s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:44.119794255 +0000 UTC m=+145.897176012" watchObservedRunningTime="2025-10-08 13:20:44.12283018 +0000 UTC m=+145.900211937" Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.124659 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zlcv8" podStartSLOduration=126.12465229 podStartE2EDuration="2m6.12465229s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:44.053211386 +0000 UTC m=+145.830593143" watchObservedRunningTime="2025-10-08 13:20:44.12465229 +0000 UTC m=+145.902034047" Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.151213 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:44 crc kubenswrapper[5065]: E1008 13:20:44.151648 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:44.651620333 +0000 UTC m=+146.429002080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.198023 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xr8vs" podStartSLOduration=126.198003108 podStartE2EDuration="2m6.198003108s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:44.196898197 +0000 UTC m=+145.974279974" watchObservedRunningTime="2025-10-08 13:20:44.198003108 +0000 UTC m=+145.975384865" Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.252486 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:44 crc kubenswrapper[5065]: E1008 13:20:44.252930 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:44.752919791 +0000 UTC m=+146.530301548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.287142 5065 patch_prober.go:28] interesting pod/router-default-5444994796-8zlcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 13:20:44 crc kubenswrapper[5065]: [-]has-synced failed: reason withheld Oct 08 13:20:44 crc kubenswrapper[5065]: [+]process-running ok Oct 08 13:20:44 crc kubenswrapper[5065]: healthz check failed Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.287194 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8zlcn" podUID="28c60830-7cae-45ed-bbe5-edbb83a24e87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.355226 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:44 crc kubenswrapper[5065]: E1008 13:20:44.355395 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:44.855370541 +0000 UTC m=+146.632752298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.355750 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:44 crc kubenswrapper[5065]: E1008 13:20:44.356203 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:44.856177473 +0000 UTC m=+146.633559230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.457939 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:44 crc kubenswrapper[5065]: E1008 13:20:44.458502 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:44.958402257 +0000 UTC m=+146.735784014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.563242 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:44 crc kubenswrapper[5065]: E1008 13:20:44.563617 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.063603983 +0000 UTC m=+146.840985740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.666645 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:44 crc kubenswrapper[5065]: E1008 13:20:44.667214 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.167198495 +0000 UTC m=+146.944580252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.768357 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:44 crc kubenswrapper[5065]: E1008 13:20:44.768751 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.26873766 +0000 UTC m=+147.046119417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.869444 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:44 crc kubenswrapper[5065]: E1008 13:20:44.869628 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.369604285 +0000 UTC m=+147.146986032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.869943 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:44 crc kubenswrapper[5065]: E1008 13:20:44.870273 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.370260484 +0000 UTC m=+147.147642241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:44 crc kubenswrapper[5065]: I1008 13:20:44.971494 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:44 crc kubenswrapper[5065]: E1008 13:20:44.971827 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.471801898 +0000 UTC m=+147.249183655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.022840 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2b6kx" event={"ID":"b17ef7c4-a962-4759-9441-33d28b384b4e","Type":"ContainerStarted","Data":"2ffd39a85f6c50c3b386171f71404a21ff117c9fd5e74d715cd4666a66fd8707"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.025059 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h8cnh" event={"ID":"acf9352c-2b0b-449c-a248-78a349654f65","Type":"ContainerStarted","Data":"365e6b10f0e3d6db317eb4ed7d7b3f4323c493be084087b4746f762a75ab61a9"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.029579 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" event={"ID":"6ded5789-2ec3-42f6-8a56-b575b8fa7dfd","Type":"ContainerStarted","Data":"24101fc473db7a14d4ab4ecf5fbac0aac77297e0bc2e4e1248712342358a985f"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.031526 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6sglj" event={"ID":"5d729eca-d89f-4f52-96c1-6b8da29e9678","Type":"ContainerStarted","Data":"08ba7e18f597dc25b5bfeb485d818eca38316ff5738780a6ee31bdef9d30446d"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.033589 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" event={"ID":"c2648eeb-e556-43bd-a3de-ace83e540571","Type":"ContainerStarted","Data":"3f8868762b53cbebafa837a4ae03b4fd14c36df0573bd03ddf87015fe53aa7ef"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.035786 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" 
event={"ID":"07e8eaa7-dc9d-4581-a962-554de51f6137","Type":"ContainerStarted","Data":"dcc2b55ebb97a9025cfdb261d828c3518c9ca0508267c57b688f232bdefea5a0"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.037493 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24" event={"ID":"6110169d-e524-4d26-a6a5-514ee5384554","Type":"ContainerStarted","Data":"d5d247502aae0faddd61754ff94aca88f90b84f58fc6a161be63cfef5244399d"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.037590 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24" event={"ID":"6110169d-e524-4d26-a6a5-514ee5384554","Type":"ContainerStarted","Data":"2ee4932c1a63a52facc50248534c20e1ecef5f8d52f15332108abc3e4bb9450d"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.037995 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.039081 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" event={"ID":"de258063-13a0-4a3d-93f4-b39fd81902cb","Type":"ContainerStarted","Data":"c3b486745a3cc6767c8ef942c47d4ba8af5d9e3ecee34dfe4aeecc184f0addfc"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.040836 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" event={"ID":"e227258a-5822-472d-a151-aa7c07951330","Type":"ContainerStarted","Data":"1f2bac63ebfabd6c67efd15c9defd92412ae26fcec03cae494127530b86aa41d"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.040897 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" event={"ID":"e227258a-5822-472d-a151-aa7c07951330","Type":"ContainerStarted","Data":"747a65fed724ce1fd201751a829d40d518bfb15c69dce8ee0343bd7c9adb460c"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.041527 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nx54q" podStartSLOduration=128.041515204 podStartE2EDuration="2m8.041515204s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:44.468634112 +0000 UTC m=+146.246015859" watchObservedRunningTime="2025-10-08 13:20:45.041515204 +0000 UTC m=+146.818896961" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.042571 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk" event={"ID":"696bcbce-29a9-4686-9ac0-e5af4558fc82","Type":"ContainerStarted","Data":"293556df960ad857a11af4880a46759b4a3cd67351394bcdc2d5b7dc500d6aa6"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.044728 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8t8br" event={"ID":"61abc989-efa8-41c2-ae46-1c7015e76aee","Type":"ContainerStarted","Data":"29a86fe90a3856bb0f98fb9f7b824dcda9789ac89fbba5eda54f793e107b3aac"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.044913 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-8t8br" event={"ID":"61abc989-efa8-41c2-ae46-1c7015e76aee","Type":"ContainerStarted","Data":"60d2638d70f08642a45e8f1ea42426f4d7a90abeebc0c94a729acfcf22cb2cb6"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.046343 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-g75m7" event={"ID":"840c1621-59f2-44e2-b60a-63f7dd6a82dd","Type":"ContainerStarted","Data":"ca0ac378fd015ddf788cfcaeeab3bd03cf4e5ddeb3031f69d6dad33e53149c60"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.046463 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-g75m7" event={"ID":"840c1621-59f2-44e2-b60a-63f7dd6a82dd","Type":"ContainerStarted","Data":"074731ee85a3bf1f2a27d40941d33f028f2441221253dabeddc5b6685e55f738"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.047025 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-g75m7" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.049020 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s52g7" event={"ID":"b92e0c1d-2733-4e94-9bf2-667b6074ebe0","Type":"ContainerStarted","Data":"11ee889a2386cca8b260a3e8215680698304f2c0a38e730a49402519724e8cc1"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.050471 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" event={"ID":"b719c48b-49ca-4947-8e2f-77523c4360ac","Type":"ContainerStarted","Data":"dae1f068e7e23270d1784bc7ffcd34c0b203381e79de5279b4107fe2c12813dd"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.052561 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd562" event={"ID":"dfcc28cc-4e1d-47a3-89f7-f65d719e320a","Type":"ContainerStarted","Data":"20a119cc5a415301bdb66e1e56f490ac337bb10a07d4911a9043b73883ecd300"} Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.053655 5065 patch_prober.go:28] interesting pod/downloads-7954f5f757-lcdtm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.053708 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lcdtm" podUID="b923335b-a2b2-4919-909d-70a6d141c798" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.053861 5065 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2xx98 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.053942 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" podUID="feb22448-6135-462d-91a3-66851678143d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.066263 
5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.066492 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7jrh9" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.070524 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wkr4f" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.073051 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:45 crc kubenswrapper[5065]: E1008 13:20:45.075602 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.575584435 +0000 UTC m=+147.352966262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.088947 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24" podStartSLOduration=127.088929247 podStartE2EDuration="2m7.088929247s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:45.08831007 +0000 UTC m=+146.865691827" watchObservedRunningTime="2025-10-08 13:20:45.088929247 +0000 UTC m=+146.866310994" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.089178 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2b6kx" podStartSLOduration=128.089172884 podStartE2EDuration="2m8.089172884s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:45.043568061 +0000 UTC m=+146.820949808" watchObservedRunningTime="2025-10-08 13:20:45.089172884 +0000 UTC m=+146.866554631" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.099586 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t4zh8"] Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.108135 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4zh8" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.112735 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.114706 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4zh8"] Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.161892 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rk2bl" podStartSLOduration=128.161873054 podStartE2EDuration="2m8.161873054s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:45.160822574 +0000 UTC m=+146.938204331" watchObservedRunningTime="2025-10-08 13:20:45.161873054 +0000 UTC m=+146.939254811" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.162971 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tkqjf" podStartSLOduration=127.162962404 podStartE2EDuration="2m7.162962404s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:45.129472019 +0000 UTC m=+146.906853796" watchObservedRunningTime="2025-10-08 13:20:45.162962404 +0000 UTC m=+146.940344181" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.182122 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.182342 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5thl8\" (UniqueName: \"kubernetes.io/projected/b9e6f6b0-a470-4b38-9777-75f994c93fee-kube-api-access-5thl8\") pod \"certified-operators-t4zh8\" (UID: \"b9e6f6b0-a470-4b38-9777-75f994c93fee\") " pod="openshift-marketplace/certified-operators-t4zh8" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.182382 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9e6f6b0-a470-4b38-9777-75f994c93fee-utilities\") pod \"certified-operators-t4zh8\" (UID: \"b9e6f6b0-a470-4b38-9777-75f994c93fee\") " pod="openshift-marketplace/certified-operators-t4zh8" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.182457 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9e6f6b0-a470-4b38-9777-75f994c93fee-catalog-content\") pod \"certified-operators-t4zh8\" (UID: \"b9e6f6b0-a470-4b38-9777-75f994c93fee\") " pod="openshift-marketplace/certified-operators-t4zh8" Oct 08 13:20:45 crc kubenswrapper[5065]: E1008 13:20:45.182550 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.682536361 +0000 UTC m=+147.459918118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.196233 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" podStartSLOduration=127.196214142 podStartE2EDuration="2m7.196214142s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:45.192794947 +0000 UTC m=+146.970176704" watchObservedRunningTime="2025-10-08 13:20:45.196214142 +0000 UTC m=+146.973595899" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.261295 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gzl88" podStartSLOduration=128.261263458 podStartE2EDuration="2m8.261263458s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:45.217242779 +0000 UTC m=+146.994624536" watchObservedRunningTime="2025-10-08 13:20:45.261263458 +0000 UTC m=+147.038645215" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.282136 5065 patch_prober.go:28] interesting pod/router-default-5444994796-8zlcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 13:20:45 crc kubenswrapper[5065]: [-]has-synced failed: reason withheld Oct 08 13:20:45 crc kubenswrapper[5065]: [+]process-running ok Oct 08 13:20:45 crc kubenswrapper[5065]: healthz check failed Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.282194 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8zlcn" podUID="28c60830-7cae-45ed-bbe5-edbb83a24e87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.284246 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.284289 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9e6f6b0-a470-4b38-9777-75f994c93fee-utilities\") pod \"certified-operators-t4zh8\" (UID: \"b9e6f6b0-a470-4b38-9777-75f994c93fee\") " pod="openshift-marketplace/certified-operators-t4zh8" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.284359 5065 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9e6f6b0-a470-4b38-9777-75f994c93fee-catalog-content\") pod \"certified-operators-t4zh8\" (UID: \"b9e6f6b0-a470-4b38-9777-75f994c93fee\") " pod="openshift-marketplace/certified-operators-t4zh8" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.284626 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5thl8\" (UniqueName: \"kubernetes.io/projected/b9e6f6b0-a470-4b38-9777-75f994c93fee-kube-api-access-5thl8\") pod \"certified-operators-t4zh8\" (UID: \"b9e6f6b0-a470-4b38-9777-75f994c93fee\") " pod="openshift-marketplace/certified-operators-t4zh8" Oct 08 13:20:45 crc kubenswrapper[5065]: E1008 13:20:45.284734 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.784715253 +0000 UTC m=+147.562097010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.284966 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9e6f6b0-a470-4b38-9777-75f994c93fee-utilities\") pod \"certified-operators-t4zh8\" (UID: \"b9e6f6b0-a470-4b38-9777-75f994c93fee\") " pod="openshift-marketplace/certified-operators-t4zh8" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.285147 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9e6f6b0-a470-4b38-9777-75f994c93fee-catalog-content\") pod \"certified-operators-t4zh8\" (UID: \"b9e6f6b0-a470-4b38-9777-75f994c93fee\") " pod="openshift-marketplace/certified-operators-t4zh8" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.309355 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jhhxk" podStartSLOduration=128.309328179 podStartE2EDuration="2m8.309328179s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:45.265887727 +0000 UTC m=+147.043269484" watchObservedRunningTime="2025-10-08 13:20:45.309328179 +0000 UTC m=+147.086709946" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.316495 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8djr2"] Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.317733 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8djr2" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.321831 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.328926 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8djr2"] Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.329608 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzwhd" podStartSLOduration=127.329584764 podStartE2EDuration="2m7.329584764s" podCreationTimestamp="2025-10-08 13:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:45.319070481 +0000 UTC m=+147.096452238" watchObservedRunningTime="2025-10-08 13:20:45.329584764 +0000 UTC m=+147.106966521" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.340586 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5thl8\" (UniqueName: \"kubernetes.io/projected/b9e6f6b0-a470-4b38-9777-75f994c93fee-kube-api-access-5thl8\") pod \"certified-operators-t4zh8\" (UID: \"b9e6f6b0-a470-4b38-9777-75f994c93fee\") " pod="openshift-marketplace/certified-operators-t4zh8" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.388013 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.388589 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/522e90a2-47b0-4a82-9cac-9665a4e2dadc-utilities\") pod \"community-operators-8djr2\" (UID: \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\") " pod="openshift-marketplace/community-operators-8djr2" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.388815 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/522e90a2-47b0-4a82-9cac-9665a4e2dadc-catalog-content\") pod \"community-operators-8djr2\" (UID: \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\") " pod="openshift-marketplace/community-operators-8djr2" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.388916 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9hd8\" (UniqueName: \"kubernetes.io/projected/522e90a2-47b0-4a82-9cac-9665a4e2dadc-kube-api-access-f9hd8\") pod \"community-operators-8djr2\" (UID: \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\") " pod="openshift-marketplace/community-operators-8djr2" Oct 08 13:20:45 crc kubenswrapper[5065]: E1008 13:20:45.389148 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.889130276 +0000 UTC m=+147.666512043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.446032 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4zh8" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.493115 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/522e90a2-47b0-4a82-9cac-9665a4e2dadc-utilities\") pod \"community-operators-8djr2\" (UID: \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\") " pod="openshift-marketplace/community-operators-8djr2" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.494539 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/522e90a2-47b0-4a82-9cac-9665a4e2dadc-catalog-content\") pod \"community-operators-8djr2\" (UID: \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\") " pod="openshift-marketplace/community-operators-8djr2" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.494677 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9hd8\" (UniqueName: \"kubernetes.io/projected/522e90a2-47b0-4a82-9cac-9665a4e2dadc-kube-api-access-f9hd8\") pod \"community-operators-8djr2\" (UID: \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\") " pod="openshift-marketplace/community-operators-8djr2" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.494808 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.495067 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/522e90a2-47b0-4a82-9cac-9665a4e2dadc-catalog-content\") pod \"community-operators-8djr2\" (UID: \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\") " pod="openshift-marketplace/community-operators-8djr2" Oct 08 13:20:45 crc kubenswrapper[5065]: E1008 13:20:45.495318 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:45.99530332 +0000 UTC m=+147.772685197 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.494466 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/522e90a2-47b0-4a82-9cac-9665a4e2dadc-utilities\") pod \"community-operators-8djr2\" (UID: \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\") " pod="openshift-marketplace/community-operators-8djr2" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.521907 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8t8br" podStartSLOduration=128.521862342 podStartE2EDuration="2m8.521862342s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:45.468509762 +0000 UTC m=+147.245891539" watchObservedRunningTime="2025-10-08 13:20:45.521862342 +0000 UTC m=+147.299244099" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.533470 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9h4zl"] Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.534373 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h4zl" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.540638 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9hd8\" (UniqueName: \"kubernetes.io/projected/522e90a2-47b0-4a82-9cac-9665a4e2dadc-kube-api-access-f9hd8\") pod \"community-operators-8djr2\" (UID: \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\") " pod="openshift-marketplace/community-operators-8djr2" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.555791 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h4zl"] Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.596497 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:45 crc kubenswrapper[5065]: E1008 13:20:45.597662 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:46.097636657 +0000 UTC m=+147.875018414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.598036 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-utilities\") pod \"certified-operators-9h4zl\" (UID: \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\") " pod="openshift-marketplace/certified-operators-9h4zl" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.598191 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvjhd\" (UniqueName: \"kubernetes.io/projected/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-kube-api-access-gvjhd\") pod \"certified-operators-9h4zl\" (UID: \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\") " pod="openshift-marketplace/certified-operators-9h4zl" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.598322 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-catalog-content\") pod \"certified-operators-9h4zl\" (UID: \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\") " pod="openshift-marketplace/certified-operators-9h4zl" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.598466 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:45 crc kubenswrapper[5065]: E1008 13:20:45.598906 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:46.098892012 +0000 UTC m=+147.876273769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.672760 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8djr2" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.700001 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.700283 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-catalog-content\") pod \"certified-operators-9h4zl\" (UID: \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\") " pod="openshift-marketplace/certified-operators-9h4zl" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.700387 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-utilities\") pod \"certified-operators-9h4zl\" (UID: \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\") " pod="openshift-marketplace/certified-operators-9h4zl" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.700477 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvjhd\" (UniqueName: \"kubernetes.io/projected/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-kube-api-access-gvjhd\") pod \"certified-operators-9h4zl\" (UID: \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\") " pod="openshift-marketplace/certified-operators-9h4zl" Oct 08 13:20:45 crc kubenswrapper[5065]: E1008 13:20:45.700877 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:46.200857898 +0000 UTC m=+147.978239665 (durationBeforeRetry 500ms). 
Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.701315 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-catalog-content\") pod \"certified-operators-9h4zl\" (UID: \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\") " pod="openshift-marketplace/certified-operators-9h4zl"
Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.701594 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-utilities\") pod \"certified-operators-9h4zl\" (UID: \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\") " pod="openshift-marketplace/certified-operators-9h4zl"
Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.710244 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-g75m7" podStartSLOduration=8.71022635 podStartE2EDuration="8.71022635s" podCreationTimestamp="2025-10-08 13:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:45.566432286 +0000 UTC m=+147.343814053" watchObservedRunningTime="2025-10-08 13:20:45.71022635 +0000 UTC m=+147.487608107"
Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.712008 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j4zrr"]
Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.712877 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j4zrr"
Need to start a new one" pod="openshift-marketplace/community-operators-j4zrr" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.753461 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j4zrr"] Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.757256 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvjhd\" (UniqueName: \"kubernetes.io/projected/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-kube-api-access-gvjhd\") pod \"certified-operators-9h4zl\" (UID: \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\") " pod="openshift-marketplace/certified-operators-9h4zl" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.802339 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab020e6c-4d15-48b9-a6c6-a23f10a35641-utilities\") pod \"community-operators-j4zrr\" (UID: \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\") " pod="openshift-marketplace/community-operators-j4zrr" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.802516 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab020e6c-4d15-48b9-a6c6-a23f10a35641-catalog-content\") pod \"community-operators-j4zrr\" (UID: \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\") " pod="openshift-marketplace/community-operators-j4zrr" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.802555 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8d4c\" (UniqueName: \"kubernetes.io/projected/ab020e6c-4d15-48b9-a6c6-a23f10a35641-kube-api-access-d8d4c\") pod \"community-operators-j4zrr\" (UID: \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\") " pod="openshift-marketplace/community-operators-j4zrr" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.802576 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:45 crc kubenswrapper[5065]: E1008 13:20:45.802871 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:46.302859365 +0000 UTC m=+148.080241122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.889702 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9h4zl" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.905103 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.905254 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.905291 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.905331 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.905354 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab020e6c-4d15-48b9-a6c6-a23f10a35641-utilities\") pod \"community-operators-j4zrr\" (UID: \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\") " pod="openshift-marketplace/community-operators-j4zrr" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.905371 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab020e6c-4d15-48b9-a6c6-a23f10a35641-catalog-content\") pod \"community-operators-j4zrr\" (UID: \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\") " pod="openshift-marketplace/community-operators-j4zrr" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.905389 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.905429 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8d4c\" (UniqueName: \"kubernetes.io/projected/ab020e6c-4d15-48b9-a6c6-a23f10a35641-kube-api-access-d8d4c\") pod \"community-operators-j4zrr\" (UID: \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\") " pod="openshift-marketplace/community-operators-j4zrr" Oct 08 13:20:45 crc kubenswrapper[5065]: E1008 13:20:45.905799 5065 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:46.405762598 +0000 UTC m=+148.183144355 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.908828 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab020e6c-4d15-48b9-a6c6-a23f10a35641-catalog-content\") pod \"community-operators-j4zrr\" (UID: \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\") " pod="openshift-marketplace/community-operators-j4zrr" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.909150 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab020e6c-4d15-48b9-a6c6-a23f10a35641-utilities\") pod \"community-operators-j4zrr\" (UID: \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\") " pod="openshift-marketplace/community-operators-j4zrr" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.913852 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.914773 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.924183 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.929129 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:45 crc kubenswrapper[5065]: I1008 13:20:45.930439 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8d4c\" (UniqueName: \"kubernetes.io/projected/ab020e6c-4d15-48b9-a6c6-a23f10a35641-kube-api-access-d8d4c\") pod \"community-operators-j4zrr\" (UID: 
\"ab020e6c-4d15-48b9-a6c6-a23f10a35641\") " pod="openshift-marketplace/community-operators-j4zrr" Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.009135 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:46 crc kubenswrapper[5065]: E1008 13:20:46.009661 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:46.509648808 +0000 UTC m=+148.287030565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.046765 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j4zrr" Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.062910 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6sglj" event={"ID":"5d729eca-d89f-4f52-96c1-6b8da29e9678","Type":"ContainerStarted","Data":"3a6154d6a88f03ace3787aba175076ce777ef01d95dca3db17fcaabb2c766b48"} Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.075534 5065 patch_prober.go:28] interesting pod/downloads-7954f5f757-lcdtm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.078463 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lcdtm" podUID="b923335b-a2b2-4919-909d-70a6d141c798" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.077591 5065 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2xx98 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.079121 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" podUID="feb22448-6135-462d-91a3-66851678143d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.088316 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.095677 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.117574 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.118184 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 13:20:46 crc kubenswrapper[5065]: E1008 13:20:46.140440 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:46.62040599 +0000 UTC m=+148.397787747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.221532 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:46 crc kubenswrapper[5065]: E1008 13:20:46.222181 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:46.72216557 +0000 UTC m=+148.499547327 (durationBeforeRetry 500ms). 
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.304639 5065 patch_prober.go:28] interesting pod/router-default-5444994796-8zlcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 08 13:20:46 crc kubenswrapper[5065]: [-]has-synced failed: reason withheld
Oct 08 13:20:46 crc kubenswrapper[5065]: [+]process-running ok
Oct 08 13:20:46 crc kubenswrapper[5065]: healthz check failed
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.304975 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8zlcn" podUID="28c60830-7cae-45ed-bbe5-edbb83a24e87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.323932 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 13:20:46 crc kubenswrapper[5065]: E1008 13:20:46.324361 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:46.824343362 +0000 UTC m=+148.601725119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.330537 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4zh8"]
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.424859 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h4zl"]
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.425486 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5"
Oct 08 13:20:46 crc kubenswrapper[5065]: E1008 13:20:46.425873 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:46.925858406 +0000 UTC m=+148.703240163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.531045 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 13:20:46 crc kubenswrapper[5065]: E1008 13:20:46.531378 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:47.031360621 +0000 UTC m=+148.808742378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.634328 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5"
Oct 08 13:20:46 crc kubenswrapper[5065]: E1008 13:20:46.634689 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:47.134676485 +0000 UTC m=+148.912058242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.735320 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 13:20:46 crc kubenswrapper[5065]: E1008 13:20:46.735833 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:47.235819388 +0000 UTC m=+149.013201145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.833276 5065 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.836562 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5"
Oct 08 13:20:46 crc kubenswrapper[5065]: E1008 13:20:46.836935 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:47.33692013 +0000 UTC m=+149.114301887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.857839 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j4zrr"]
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.861582 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8djr2"]
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.939238 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 13:20:46 crc kubenswrapper[5065]: E1008 13:20:46.939490 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 13:20:47.439400951 +0000 UTC m=+149.216782718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:46 crc kubenswrapper[5065]: I1008 13:20:46.939742 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5"
Oct 08 13:20:46 crc kubenswrapper[5065]: E1008 13:20:46.940078 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 13:20:47.44006673 +0000 UTC m=+149.217448487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nnvb5" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.021528 5065 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-08T13:20:46.833302979Z","Handler":null,"Name":""}
Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.032972 5065 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.033007 5065 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.044348 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.057840 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.071703 5065 generic.go:334] "Generic (PLEG): container finished" podID="b9e6f6b0-a470-4b38-9777-75f994c93fee" containerID="8a796efe31f38f2970ad0074228daab9f8742da4d4d88f127ce11fe3db047391" exitCode=0 Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.071718 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4zh8" event={"ID":"b9e6f6b0-a470-4b38-9777-75f994c93fee","Type":"ContainerDied","Data":"8a796efe31f38f2970ad0074228daab9f8742da4d4d88f127ce11fe3db047391"} Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.071785 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4zh8" event={"ID":"b9e6f6b0-a470-4b38-9777-75f994c93fee","Type":"ContainerStarted","Data":"2072c643570e144c11ce8eb5348fa5203a285b1221b73523bf22d5bba9213f70"} Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.079199 5065 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.090200 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8djr2" event={"ID":"522e90a2-47b0-4a82-9cac-9665a4e2dadc","Type":"ContainerStarted","Data":"99432c74e95d1b5a74135b93e73674b4a615716f29ee1d962e5109ecc96b5ff3"} Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.099808 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4zrr" event={"ID":"ab020e6c-4d15-48b9-a6c6-a23f10a35641","Type":"ContainerStarted","Data":"cc366aa1d50fea4bca3406d21126d6ec1c94989be8a0530497fb99adb73b34fc"} Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.107634 5065 generic.go:334] "Generic (PLEG): container finished" podID="b719c48b-49ca-4947-8e2f-77523c4360ac" containerID="dae1f068e7e23270d1784bc7ffcd34c0b203381e79de5279b4107fe2c12813dd" exitCode=0 Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.107741 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" event={"ID":"b719c48b-49ca-4947-8e2f-77523c4360ac","Type":"ContainerDied","Data":"dae1f068e7e23270d1784bc7ffcd34c0b203381e79de5279b4107fe2c12813dd"} Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.114992 5065 generic.go:334] "Generic (PLEG): container finished" podID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" containerID="3c0644f651773120b932bf410bd201013b9c876eccea373b37deca1f4eeb2896" exitCode=0 Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.115114 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h4zl" event={"ID":"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0","Type":"ContainerDied","Data":"3c0644f651773120b932bf410bd201013b9c876eccea373b37deca1f4eeb2896"} Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.115152 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h4zl" event={"ID":"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0","Type":"ContainerStarted","Data":"2df21a005037dff8cc5023c48379ab5358d5f1b343df8d1e5aa14e050b2547c8"} Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.118136 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6sglj" 
event={"ID":"5d729eca-d89f-4f52-96c1-6b8da29e9678","Type":"ContainerStarted","Data":"5e201b7cec0d0303a2bf0bcf46afe357393cdb6062352875eb4f908014874c02"} Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.119825 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c9222a906f830ae2142c8ae8fbeddd3359cd4b90f9201c3fa0f7be426f55def4"} Oct 08 13:20:47 crc kubenswrapper[5065]: W1008 13:20:47.122837 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-c8a650849c5896c08ac755c82e7567e06d0d2ec4cf60a85cd037c86643657d9b WatchSource:0}: Error finding container c8a650849c5896c08ac755c82e7567e06d0d2ec4cf60a85cd037c86643657d9b: Status 404 returned error can't find the container with id c8a650849c5896c08ac755c82e7567e06d0d2ec4cf60a85cd037c86643657d9b Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.145283 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.155259 5065 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.155302 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.293560 5065 patch_prober.go:28] interesting pod/router-default-5444994796-8zlcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 13:20:47 crc kubenswrapper[5065]: [-]has-synced failed: reason withheld Oct 08 13:20:47 crc kubenswrapper[5065]: [+]process-running ok Oct 08 13:20:47 crc kubenswrapper[5065]: healthz check failed Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.294162 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8zlcn" podUID="28c60830-7cae-45ed-bbe5-edbb83a24e87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.299137 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nnvb5\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.305881 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gqzmt"] Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.310008 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gqzmt" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.312533 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.314841 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqzmt"] Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.349252 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9769ef8d-73d1-49d6-a138-efc820a036e7-utilities\") pod \"redhat-marketplace-gqzmt\" (UID: \"9769ef8d-73d1-49d6-a138-efc820a036e7\") " pod="openshift-marketplace/redhat-marketplace-gqzmt" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.349303 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pgn8\" (UniqueName: \"kubernetes.io/projected/9769ef8d-73d1-49d6-a138-efc820a036e7-kube-api-access-8pgn8\") pod \"redhat-marketplace-gqzmt\" (UID: \"9769ef8d-73d1-49d6-a138-efc820a036e7\") " pod="openshift-marketplace/redhat-marketplace-gqzmt" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.349346 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9769ef8d-73d1-49d6-a138-efc820a036e7-catalog-content\") pod \"redhat-marketplace-gqzmt\" (UID: \"9769ef8d-73d1-49d6-a138-efc820a036e7\") " pod="openshift-marketplace/redhat-marketplace-gqzmt" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.453659 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9769ef8d-73d1-49d6-a138-efc820a036e7-utilities\") pod \"redhat-marketplace-gqzmt\" (UID: \"9769ef8d-73d1-49d6-a138-efc820a036e7\") " pod="openshift-marketplace/redhat-marketplace-gqzmt" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.453737 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pgn8\" (UniqueName: \"kubernetes.io/projected/9769ef8d-73d1-49d6-a138-efc820a036e7-kube-api-access-8pgn8\") pod \"redhat-marketplace-gqzmt\" (UID: \"9769ef8d-73d1-49d6-a138-efc820a036e7\") " pod="openshift-marketplace/redhat-marketplace-gqzmt" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.453783 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9769ef8d-73d1-49d6-a138-efc820a036e7-catalog-content\") pod \"redhat-marketplace-gqzmt\" (UID: \"9769ef8d-73d1-49d6-a138-efc820a036e7\") " pod="openshift-marketplace/redhat-marketplace-gqzmt" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.454686 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9769ef8d-73d1-49d6-a138-efc820a036e7-catalog-content\") pod \"redhat-marketplace-gqzmt\" (UID: \"9769ef8d-73d1-49d6-a138-efc820a036e7\") " 
pod="openshift-marketplace/redhat-marketplace-gqzmt" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.454991 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9769ef8d-73d1-49d6-a138-efc820a036e7-utilities\") pod \"redhat-marketplace-gqzmt\" (UID: \"9769ef8d-73d1-49d6-a138-efc820a036e7\") " pod="openshift-marketplace/redhat-marketplace-gqzmt" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.480208 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pgn8\" (UniqueName: \"kubernetes.io/projected/9769ef8d-73d1-49d6-a138-efc820a036e7-kube-api-access-8pgn8\") pod \"redhat-marketplace-gqzmt\" (UID: \"9769ef8d-73d1-49d6-a138-efc820a036e7\") " pod="openshift-marketplace/redhat-marketplace-gqzmt" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.492071 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.698474 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b8kbv"] Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.699687 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b8kbv" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.716337 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b8kbv"] Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.728613 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gqzmt" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.756774 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-utilities\") pod \"redhat-marketplace-b8kbv\" (UID: \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\") " pod="openshift-marketplace/redhat-marketplace-b8kbv" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.756936 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68jn5\" (UniqueName: \"kubernetes.io/projected/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-kube-api-access-68jn5\") pod \"redhat-marketplace-b8kbv\" (UID: \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\") " pod="openshift-marketplace/redhat-marketplace-b8kbv" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.757006 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-catalog-content\") pod \"redhat-marketplace-b8kbv\" (UID: \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\") " pod="openshift-marketplace/redhat-marketplace-b8kbv" Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.757765 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nnvb5"] Oct 08 13:20:47 crc kubenswrapper[5065]: W1008 13:20:47.764876 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e2d2016_716c_4261_a1c0_5dbd804a65d8.slice/crio-f3e2504baaf28a95e0164c52a1c413ae2043765ae3431165a64f658184f6ab09 WatchSource:0}: Error finding container 
Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.858058 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68jn5\" (UniqueName: \"kubernetes.io/projected/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-kube-api-access-68jn5\") pod \"redhat-marketplace-b8kbv\" (UID: \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\") " pod="openshift-marketplace/redhat-marketplace-b8kbv"
Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.858256 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-catalog-content\") pod \"redhat-marketplace-b8kbv\" (UID: \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\") " pod="openshift-marketplace/redhat-marketplace-b8kbv"
Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.858289 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-utilities\") pod \"redhat-marketplace-b8kbv\" (UID: \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\") " pod="openshift-marketplace/redhat-marketplace-b8kbv"
Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.858756 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-utilities\") pod \"redhat-marketplace-b8kbv\" (UID: \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\") " pod="openshift-marketplace/redhat-marketplace-b8kbv"
Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.859034 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-catalog-content\") pod \"redhat-marketplace-b8kbv\" (UID: \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\") " pod="openshift-marketplace/redhat-marketplace-b8kbv"
Oct 08 13:20:47 crc kubenswrapper[5065]: I1008 13:20:47.893186 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68jn5\" (UniqueName: \"kubernetes.io/projected/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-kube-api-access-68jn5\") pod \"redhat-marketplace-b8kbv\" (UID: \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\") " pod="openshift-marketplace/redhat-marketplace-b8kbv"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.015112 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b8kbv"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b8kbv" Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.148673 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ca9f2cca68c8de419e487401d60a0c2c8dc2f7c7f713c172146bcaea42af803e"} Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.148992 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b097f400917c31eb447adf7f529243f438832d86eac6c3ac84ad4d7a67d3ffae"} Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.149211 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.155546 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2e96048f5427ad976f94230714245fd18bffe58e53689088bc3b07d7b06571a8"} Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.176756 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6sglj" event={"ID":"5d729eca-d89f-4f52-96c1-6b8da29e9678","Type":"ContainerStarted","Data":"16366d8ddab93eff09ca4fd80d8518793361f0b8e7d6d5d12ddfedaa167bfbcd"} Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.186842 5065 generic.go:334] "Generic (PLEG): container finished" podID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" containerID="0964d421161460dcfeea10d008acf05ce5a2f050806c2b3cce1cf6365920ce7f" exitCode=0 Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.186943 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8djr2" event={"ID":"522e90a2-47b0-4a82-9cac-9665a4e2dadc","Type":"ContainerDied","Data":"0964d421161460dcfeea10d008acf05ce5a2f050806c2b3cce1cf6365920ce7f"} Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.198302 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7abb7d0c0cef9be65c1206456c46ae42517e9ab4296b323b2fc7e55b7ed26600"} Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.198349 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c8a650849c5896c08ac755c82e7567e06d0d2ec4cf60a85cd037c86643657d9b"} Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.203532 5065 generic.go:334] "Generic (PLEG): container finished" podID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" containerID="44919f81701cf15065591ec1729d4c067be3fbf52495fb497ad277399ade8bd0" exitCode=0 Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.203603 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4zrr" event={"ID":"ab020e6c-4d15-48b9-a6c6-a23f10a35641","Type":"ContainerDied","Data":"44919f81701cf15065591ec1729d4c067be3fbf52495fb497ad277399ade8bd0"} Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.211354 5065 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" event={"ID":"0e2d2016-716c-4261-a1c0-5dbd804a65d8","Type":"ContainerStarted","Data":"53fea4f5b42eeda5601fdf4ba5d7d3e9c779b7c5e3677f7dfb4891f2b1f9fca1"} Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.211464 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.211478 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" event={"ID":"0e2d2016-716c-4261-a1c0-5dbd804a65d8","Type":"ContainerStarted","Data":"f3e2504baaf28a95e0164c52a1c413ae2043765ae3431165a64f658184f6ab09"} Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.216636 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqzmt"] Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.269201 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" podStartSLOduration=131.269186461 podStartE2EDuration="2m11.269186461s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:48.267189996 +0000 UTC m=+150.044571773" watchObservedRunningTime="2025-10-08 13:20:48.269186461 +0000 UTC m=+150.046568218" Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.269787 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6sglj" podStartSLOduration=11.269783308 podStartE2EDuration="11.269783308s" podCreationTimestamp="2025-10-08 13:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:20:48.249252715 +0000 UTC m=+150.026634492" watchObservedRunningTime="2025-10-08 13:20:48.269783308 +0000 UTC m=+150.047165065" Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.281606 5065 patch_prober.go:28] interesting pod/router-default-5444994796-8zlcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 13:20:48 crc kubenswrapper[5065]: [-]has-synced failed: reason withheld Oct 08 13:20:48 crc kubenswrapper[5065]: [+]process-running ok Oct 08 13:20:48 crc kubenswrapper[5065]: healthz check failed Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.281669 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8zlcn" podUID="28c60830-7cae-45ed-bbe5-edbb83a24e87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.303109 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rmdbk"] Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.304335 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rmdbk" Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.312528 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.316231 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmdbk"] Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.368120 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-utilities\") pod \"redhat-operators-rmdbk\" (UID: \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\") " pod="openshift-marketplace/redhat-operators-rmdbk" Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.368204 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-catalog-content\") pod \"redhat-operators-rmdbk\" (UID: \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\") " pod="openshift-marketplace/redhat-operators-rmdbk" Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.368236 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vqfn\" (UniqueName: \"kubernetes.io/projected/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-kube-api-access-8vqfn\") pod \"redhat-operators-rmdbk\" (UID: \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\") " pod="openshift-marketplace/redhat-operators-rmdbk" Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.403931 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b8kbv"] Oct 08 13:20:48 crc kubenswrapper[5065]: W1008 13:20:48.431749 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9f3b5f3_9ac9_4ebc_8343_eb6d6c6b6d5b.slice/crio-5c4a641d749d8f4212b9f8a5267521b1f27b9756feb490ce6b3367ae05a3b9c9 WatchSource:0}: Error finding container 5c4a641d749d8f4212b9f8a5267521b1f27b9756feb490ce6b3367ae05a3b9c9: Status 404 returned error can't find the container with id 5c4a641d749d8f4212b9f8a5267521b1f27b9756feb490ce6b3367ae05a3b9c9 Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.472865 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vqfn\" (UniqueName: \"kubernetes.io/projected/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-kube-api-access-8vqfn\") pod \"redhat-operators-rmdbk\" (UID: \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\") " pod="openshift-marketplace/redhat-operators-rmdbk" Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.472983 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-utilities\") pod \"redhat-operators-rmdbk\" (UID: \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\") " pod="openshift-marketplace/redhat-operators-rmdbk" Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.473137 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-catalog-content\") pod \"redhat-operators-rmdbk\" (UID: \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\") " pod="openshift-marketplace/redhat-operators-rmdbk" Oct 
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.473875 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-catalog-content\") pod \"redhat-operators-rmdbk\" (UID: \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\") " pod="openshift-marketplace/redhat-operators-rmdbk"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.474717 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-utilities\") pod \"redhat-operators-rmdbk\" (UID: \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\") " pod="openshift-marketplace/redhat-operators-rmdbk"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.492574 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vqfn\" (UniqueName: \"kubernetes.io/projected/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-kube-api-access-8vqfn\") pod \"redhat-operators-rmdbk\" (UID: \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\") " pod="openshift-marketplace/redhat-operators-rmdbk"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.577554 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.636011 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmdbk"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.680892 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf7l2\" (UniqueName: \"kubernetes.io/projected/b719c48b-49ca-4947-8e2f-77523c4360ac-kube-api-access-gf7l2\") pod \"b719c48b-49ca-4947-8e2f-77523c4360ac\" (UID: \"b719c48b-49ca-4947-8e2f-77523c4360ac\") "
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.681009 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b719c48b-49ca-4947-8e2f-77523c4360ac-secret-volume\") pod \"b719c48b-49ca-4947-8e2f-77523c4360ac\" (UID: \"b719c48b-49ca-4947-8e2f-77523c4360ac\") "
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.681049 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b719c48b-49ca-4947-8e2f-77523c4360ac-config-volume\") pod \"b719c48b-49ca-4947-8e2f-77523c4360ac\" (UID: \"b719c48b-49ca-4947-8e2f-77523c4360ac\") "
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.681936 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b719c48b-49ca-4947-8e2f-77523c4360ac-config-volume" (OuterVolumeSpecName: "config-volume") pod "b719c48b-49ca-4947-8e2f-77523c4360ac" (UID: "b719c48b-49ca-4947-8e2f-77523c4360ac"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.686977 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x6mc5"]
Oct 08 13:20:48 crc kubenswrapper[5065]: E1008 13:20:48.687179 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b719c48b-49ca-4947-8e2f-77523c4360ac" containerName="collect-profiles"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.687193 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b719c48b-49ca-4947-8e2f-77523c4360ac" containerName="collect-profiles"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.687291 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="b719c48b-49ca-4947-8e2f-77523c4360ac" containerName="collect-profiles"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.688013 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.688671 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b719c48b-49ca-4947-8e2f-77523c4360ac-kube-api-access-gf7l2" (OuterVolumeSpecName: "kube-api-access-gf7l2") pod "b719c48b-49ca-4947-8e2f-77523c4360ac" (UID: "b719c48b-49ca-4947-8e2f-77523c4360ac"). InnerVolumeSpecName "kube-api-access-gf7l2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.688745 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b719c48b-49ca-4947-8e2f-77523c4360ac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b719c48b-49ca-4947-8e2f-77523c4360ac" (UID: "b719c48b-49ca-4947-8e2f-77523c4360ac"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.697249 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x6mc5"]
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.782103 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrgxj\" (UniqueName: \"kubernetes.io/projected/61d3554b-9ede-49cf-a968-6a42c1620a64-kube-api-access-wrgxj\") pod \"redhat-operators-x6mc5\" (UID: \"61d3554b-9ede-49cf-a968-6a42c1620a64\") " pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.782158 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61d3554b-9ede-49cf-a968-6a42c1620a64-catalog-content\") pod \"redhat-operators-x6mc5\" (UID: \"61d3554b-9ede-49cf-a968-6a42c1620a64\") " pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.782209 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61d3554b-9ede-49cf-a968-6a42c1620a64-utilities\") pod \"redhat-operators-x6mc5\" (UID: \"61d3554b-9ede-49cf-a968-6a42c1620a64\") " pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.782255 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf7l2\" (UniqueName: \"kubernetes.io/projected/b719c48b-49ca-4947-8e2f-77523c4360ac-kube-api-access-gf7l2\") on node \"crc\" DevicePath \"\""
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.782267 5065 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b719c48b-49ca-4947-8e2f-77523c4360ac-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.782275 5065 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b719c48b-49ca-4947-8e2f-77523c4360ac-config-volume\") on node \"crc\" DevicePath \"\""
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.883286 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrgxj\" (UniqueName: \"kubernetes.io/projected/61d3554b-9ede-49cf-a968-6a42c1620a64-kube-api-access-wrgxj\") pod \"redhat-operators-x6mc5\" (UID: \"61d3554b-9ede-49cf-a968-6a42c1620a64\") " pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.883672 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61d3554b-9ede-49cf-a968-6a42c1620a64-catalog-content\") pod \"redhat-operators-x6mc5\" (UID: \"61d3554b-9ede-49cf-a968-6a42c1620a64\") " pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.883736 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61d3554b-9ede-49cf-a968-6a42c1620a64-utilities\") pod \"redhat-operators-x6mc5\" (UID: \"61d3554b-9ede-49cf-a968-6a42c1620a64\") " pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.884341 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61d3554b-9ede-49cf-a968-6a42c1620a64-catalog-content\") pod \"redhat-operators-x6mc5\" (UID: \"61d3554b-9ede-49cf-a968-6a42c1620a64\") " pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.884376 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61d3554b-9ede-49cf-a968-6a42c1620a64-utilities\") pod \"redhat-operators-x6mc5\" (UID: \"61d3554b-9ede-49cf-a968-6a42c1620a64\") " pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.892178 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Oct 08 13:20:48 crc kubenswrapper[5065]: I1008 13:20:48.923690 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrgxj\" (UniqueName: \"kubernetes.io/projected/61d3554b-9ede-49cf-a968-6a42c1620a64-kube-api-access-wrgxj\") pod \"redhat-operators-x6mc5\" (UID: \"61d3554b-9ede-49cf-a968-6a42c1620a64\") " pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.013455 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.092233 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmdbk"]
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.236809 5065 generic.go:334] "Generic (PLEG): container finished" podID="f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" containerID="d5cc4319909b14dd296f4f566065f7d7b01af7ceeeb962e6e1f2db284039a1b7" exitCode=0
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.237827 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b8kbv" event={"ID":"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b","Type":"ContainerDied","Data":"d5cc4319909b14dd296f4f566065f7d7b01af7ceeeb962e6e1f2db284039a1b7"}
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.237852 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b8kbv" event={"ID":"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b","Type":"ContainerStarted","Data":"5c4a641d749d8f4212b9f8a5267521b1f27b9756feb490ce6b3367ae05a3b9c9"}
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.247893 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmdbk" event={"ID":"c0ab2cf0-b994-4c5e-a074-fe9c56d34171","Type":"ContainerStarted","Data":"2ee3a1e77345e3611a4001cd06598d3723b2bbc65dc1b912ca132ff2bb0b7dcb"}
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.250249 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm" event={"ID":"b719c48b-49ca-4947-8e2f-77523c4360ac","Type":"ContainerDied","Data":"a3909a5740342a1b4b0a8e24e95845d1243d25c8757eba2a536068cb2d571ccd"}
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.250281 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3909a5740342a1b4b0a8e24e95845d1243d25c8757eba2a536068cb2d571ccd"
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.250336 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm"
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.261351 5065 generic.go:334] "Generic (PLEG): container finished" podID="9769ef8d-73d1-49d6-a138-efc820a036e7" containerID="ad12221fcadd7700899a2098909bad668364e6ff90d626862bd4a8557a2dcc5c" exitCode=0
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.262484 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqzmt" event={"ID":"9769ef8d-73d1-49d6-a138-efc820a036e7","Type":"ContainerDied","Data":"ad12221fcadd7700899a2098909bad668364e6ff90d626862bd4a8557a2dcc5c"}
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.262511 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqzmt" event={"ID":"9769ef8d-73d1-49d6-a138-efc820a036e7","Type":"ContainerStarted","Data":"898aa45d56d1883f6fb096300c5e997ce4b022e12c218173580efba80c33c065"}
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.281634 5065 patch_prober.go:28] interesting pod/router-default-5444994796-8zlcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 08 13:20:49 crc kubenswrapper[5065]: [-]has-synced failed: reason withheld
Oct 08 13:20:49 crc kubenswrapper[5065]: [+]process-running ok
Oct 08 13:20:49 crc kubenswrapper[5065]: healthz check failed
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.281700 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8zlcn" podUID="28c60830-7cae-45ed-bbe5-edbb83a24e87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.387880 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x6mc5"]
Oct 08 13:20:49 crc kubenswrapper[5065]: W1008 13:20:49.422381 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61d3554b_9ede_49cf_a968_6a42c1620a64.slice/crio-dd6a08e077ae278306edbe1b968e5120d38b0b3913f038e2712b07bce2b62c19 WatchSource:0}: Error finding container dd6a08e077ae278306edbe1b968e5120d38b0b3913f038e2712b07bce2b62c19: Status 404 returned error can't find the container with id dd6a08e077ae278306edbe1b968e5120d38b0b3913f038e2712b07bce2b62c19
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.578671 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-w27qr"
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.578725 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-w27qr"
Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.581646 5065 patch_prober.go:28] interesting pod/console-f9d7485db-w27qr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.5:8443/health\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
10.217.0.5:8443: connect: connection refused" Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.928047 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.928130 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.934007 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.934998 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.936899 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.937059 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.943826 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:49 crc kubenswrapper[5065]: I1008 13:20:49.944040 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.011283 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7b40a50-a7cc-4421-9787-8fb4f51ca7ae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d7b40a50-a7cc-4421-9787-8fb4f51ca7ae\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.011485 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7b40a50-a7cc-4421-9787-8fb4f51ca7ae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d7b40a50-a7cc-4421-9787-8fb4f51ca7ae\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.015720 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.015766 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.022624 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.113882 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7b40a50-a7cc-4421-9787-8fb4f51ca7ae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d7b40a50-a7cc-4421-9787-8fb4f51ca7ae\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.113146 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/d7b40a50-a7cc-4421-9787-8fb4f51ca7ae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d7b40a50-a7cc-4421-9787-8fb4f51ca7ae\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.114331 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7b40a50-a7cc-4421-9787-8fb4f51ca7ae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d7b40a50-a7cc-4421-9787-8fb4f51ca7ae\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.137005 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7b40a50-a7cc-4421-9787-8fb4f51ca7ae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d7b40a50-a7cc-4421-9787-8fb4f51ca7ae\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.195559 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.196059 5065 patch_prober.go:28] interesting pod/downloads-7954f5f757-lcdtm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.196143 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lcdtm" podUID="b923335b-a2b2-4919-909d-70a6d141c798" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.197228 5065 patch_prober.go:28] interesting pod/downloads-7954f5f757-lcdtm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.197336 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lcdtm" podUID="b923335b-a2b2-4919-909d-70a6d141c798" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.277500 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.277766 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.278267 5065 generic.go:334] "Generic (PLEG): container finished" podID="61d3554b-9ede-49cf-a968-6a42c1620a64" containerID="9510fb5abb76be8243b2a88f4a39888776c365ed9a7fbf2015ca26d3281277de" exitCode=0 Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.278341 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6mc5" event={"ID":"61d3554b-9ede-49cf-a968-6a42c1620a64","Type":"ContainerDied","Data":"9510fb5abb76be8243b2a88f4a39888776c365ed9a7fbf2015ca26d3281277de"} Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.278366 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6mc5" event={"ID":"61d3554b-9ede-49cf-a968-6a42c1620a64","Type":"ContainerStarted","Data":"dd6a08e077ae278306edbe1b968e5120d38b0b3913f038e2712b07bce2b62c19"} Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.281903 5065 patch_prober.go:28] interesting pod/router-default-5444994796-8zlcn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 13:20:50 crc kubenswrapper[5065]: [-]has-synced failed: reason withheld Oct 08 13:20:50 crc kubenswrapper[5065]: [+]process-running ok Oct 08 13:20:50 crc kubenswrapper[5065]: healthz check failed Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.281992 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8zlcn" podUID="28c60830-7cae-45ed-bbe5-edbb83a24e87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.283776 5065 generic.go:334] "Generic (PLEG): container finished" podID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" containerID="5e729d305a3cc3670c70587cf6e83be26e67288042ddeb2bd549d0b776096c8c" exitCode=0 Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.284352 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmdbk" event={"ID":"c0ab2cf0-b994-4c5e-a074-fe9c56d34171","Type":"ContainerDied","Data":"5e729d305a3cc3670c70587cf6e83be26e67288042ddeb2bd549d0b776096c8c"} Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.290917 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mrbd6" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.291834 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8t8br" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.556749 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.557548 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.572667 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.572798 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.572917 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.627081 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ca22f02-86bd-40e4-bf14-bf0dcf1f6258-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9ca22f02-86bd-40e4-bf14-bf0dcf1f6258\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.627140 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ca22f02-86bd-40e4-bf14-bf0dcf1f6258-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9ca22f02-86bd-40e4-bf14-bf0dcf1f6258\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.728813 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ca22f02-86bd-40e4-bf14-bf0dcf1f6258-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9ca22f02-86bd-40e4-bf14-bf0dcf1f6258\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.729444 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ca22f02-86bd-40e4-bf14-bf0dcf1f6258-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9ca22f02-86bd-40e4-bf14-bf0dcf1f6258\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.730096 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ca22f02-86bd-40e4-bf14-bf0dcf1f6258-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9ca22f02-86bd-40e4-bf14-bf0dcf1f6258\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.755486 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ca22f02-86bd-40e4-bf14-bf0dcf1f6258-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9ca22f02-86bd-40e4-bf14-bf0dcf1f6258\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.798772 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 08 13:20:50 crc kubenswrapper[5065]: I1008 13:20:50.900349 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 13:20:51 crc kubenswrapper[5065]: I1008 13:20:51.282787 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:51 crc kubenswrapper[5065]: I1008 13:20:51.307645 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d7b40a50-a7cc-4421-9787-8fb4f51ca7ae","Type":"ContainerStarted","Data":"a542b42606ac2389f70f1b3f2a361b70738908ccf38ec52edad90ae0098659ba"} Oct 08 13:20:51 crc kubenswrapper[5065]: I1008 13:20:51.309815 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-8zlcn" Oct 08 13:20:51 crc kubenswrapper[5065]: I1008 13:20:51.484369 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 08 13:20:52 crc kubenswrapper[5065]: I1008 13:20:52.379923 5065 generic.go:334] "Generic (PLEG): container finished" podID="d7b40a50-a7cc-4421-9787-8fb4f51ca7ae" containerID="c8517cf865b99f704f87186411e3968e52c1b3a3dde32d083679561ade0401fb" exitCode=0 Oct 08 13:20:52 crc kubenswrapper[5065]: I1008 13:20:52.380040 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d7b40a50-a7cc-4421-9787-8fb4f51ca7ae","Type":"ContainerDied","Data":"c8517cf865b99f704f87186411e3968e52c1b3a3dde32d083679561ade0401fb"} Oct 08 13:20:52 crc kubenswrapper[5065]: I1008 13:20:52.384485 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9ca22f02-86bd-40e4-bf14-bf0dcf1f6258","Type":"ContainerStarted","Data":"7dc507b175907c4f43fdfc35bf704e1710fe2cd2f926a948188a5bd0b0584322"} Oct 08 13:20:53 crc kubenswrapper[5065]: I1008 13:20:53.393244 5065 generic.go:334] "Generic (PLEG): container finished" podID="9ca22f02-86bd-40e4-bf14-bf0dcf1f6258" containerID="f7c89637f9dd05ed26f06b5f39368c531c08b2a1abf5a1876f1d6f512ce4c57b" exitCode=0 Oct 08 13:20:53 crc kubenswrapper[5065]: I1008 13:20:53.393359 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9ca22f02-86bd-40e4-bf14-bf0dcf1f6258","Type":"ContainerDied","Data":"f7c89637f9dd05ed26f06b5f39368c531c08b2a1abf5a1876f1d6f512ce4c57b"} Oct 08 13:20:54 crc kubenswrapper[5065]: I1008 13:20:54.374847 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:20:54 crc kubenswrapper[5065]: I1008 13:20:54.374914 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:20:55 crc kubenswrapper[5065]: I1008 13:20:55.953120 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-g75m7" Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.396733 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.431556 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9ca22f02-86bd-40e4-bf14-bf0dcf1f6258","Type":"ContainerDied","Data":"7dc507b175907c4f43fdfc35bf704e1710fe2cd2f926a948188a5bd0b0584322"} Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.431593 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dc507b175907c4f43fdfc35bf704e1710fe2cd2f926a948188a5bd0b0584322" Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.431599 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.484472 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ca22f02-86bd-40e4-bf14-bf0dcf1f6258-kubelet-dir\") pod \"9ca22f02-86bd-40e4-bf14-bf0dcf1f6258\" (UID: \"9ca22f02-86bd-40e4-bf14-bf0dcf1f6258\") " Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.484568 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ca22f02-86bd-40e4-bf14-bf0dcf1f6258-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9ca22f02-86bd-40e4-bf14-bf0dcf1f6258" (UID: "9ca22f02-86bd-40e4-bf14-bf0dcf1f6258"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.484636 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ca22f02-86bd-40e4-bf14-bf0dcf1f6258-kube-api-access\") pod \"9ca22f02-86bd-40e4-bf14-bf0dcf1f6258\" (UID: \"9ca22f02-86bd-40e4-bf14-bf0dcf1f6258\") " Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.484820 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.484910 5065 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ca22f02-86bd-40e4-bf14-bf0dcf1f6258-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.490460 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ca22f02-86bd-40e4-bf14-bf0dcf1f6258-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9ca22f02-86bd-40e4-bf14-bf0dcf1f6258" (UID: "9ca22f02-86bd-40e4-bf14-bf0dcf1f6258"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.499632 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8a38e7c-bbc4-4255-ab4e-a056eb0655be-metrics-certs\") pod \"network-metrics-daemon-6nwh2\" (UID: \"c8a38e7c-bbc4-4255-ab4e-a056eb0655be\") " pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.583985 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.585639 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ca22f02-86bd-40e4-bf14-bf0dcf1f6258-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.590828 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-w27qr" Oct 08 13:20:59 crc kubenswrapper[5065]: I1008 13:20:59.604211 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6nwh2" Oct 08 13:21:00 crc kubenswrapper[5065]: I1008 13:21:00.204052 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-lcdtm" Oct 08 13:21:01 crc kubenswrapper[5065]: I1008 13:21:01.051563 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 13:21:01 crc kubenswrapper[5065]: I1008 13:21:01.107916 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7b40a50-a7cc-4421-9787-8fb4f51ca7ae-kubelet-dir\") pod \"d7b40a50-a7cc-4421-9787-8fb4f51ca7ae\" (UID: \"d7b40a50-a7cc-4421-9787-8fb4f51ca7ae\") " Oct 08 13:21:01 crc kubenswrapper[5065]: I1008 13:21:01.108059 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7b40a50-a7cc-4421-9787-8fb4f51ca7ae-kube-api-access\") pod \"d7b40a50-a7cc-4421-9787-8fb4f51ca7ae\" (UID: \"d7b40a50-a7cc-4421-9787-8fb4f51ca7ae\") " Oct 08 13:21:01 crc kubenswrapper[5065]: I1008 13:21:01.108055 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7b40a50-a7cc-4421-9787-8fb4f51ca7ae-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d7b40a50-a7cc-4421-9787-8fb4f51ca7ae" (UID: "d7b40a50-a7cc-4421-9787-8fb4f51ca7ae"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:21:01 crc kubenswrapper[5065]: I1008 13:21:01.108337 5065 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7b40a50-a7cc-4421-9787-8fb4f51ca7ae-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 08 13:21:01 crc kubenswrapper[5065]: I1008 13:21:01.114352 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b40a50-a7cc-4421-9787-8fb4f51ca7ae-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7b40a50-a7cc-4421-9787-8fb4f51ca7ae" (UID: "d7b40a50-a7cc-4421-9787-8fb4f51ca7ae"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:21:01 crc kubenswrapper[5065]: I1008 13:21:01.210374 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7b40a50-a7cc-4421-9787-8fb4f51ca7ae-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 13:21:01 crc kubenswrapper[5065]: I1008 13:21:01.443985 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d7b40a50-a7cc-4421-9787-8fb4f51ca7ae","Type":"ContainerDied","Data":"a542b42606ac2389f70f1b3f2a361b70738908ccf38ec52edad90ae0098659ba"} Oct 08 13:21:01 crc kubenswrapper[5065]: I1008 13:21:01.444030 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 13:21:01 crc kubenswrapper[5065]: I1008 13:21:01.444032 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a542b42606ac2389f70f1b3f2a361b70738908ccf38ec52edad90ae0098659ba" Oct 08 13:21:07 crc kubenswrapper[5065]: I1008 13:21:07.508710 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:21:17 crc kubenswrapper[5065]: E1008 13:21:17.771892 5065 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 08 13:21:17 crc kubenswrapper[5065]: E1008 13:21:17.772466 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9hd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8djr2_openshift-marketplace(522e90a2-47b0-4a82-9cac-9665a4e2dadc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 08 13:21:17 crc kubenswrapper[5065]: E1008 
13:21:17.773674 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8djr2" podUID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" Oct 08 13:21:20 crc kubenswrapper[5065]: I1008 13:21:20.527026 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-94t24" Oct 08 13:21:20 crc kubenswrapper[5065]: E1008 13:21:20.608070 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8djr2" podUID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" Oct 08 13:21:20 crc kubenswrapper[5065]: E1008 13:21:20.683528 5065 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 08 13:21:20 crc kubenswrapper[5065]: E1008 13:21:20.683713 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vqfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-rmdbk_openshift-marketplace(c0ab2cf0-b994-4c5e-a074-fe9c56d34171): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 08 13:21:20 crc kubenswrapper[5065]: E1008 13:21:20.684833 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-rmdbk" 
podUID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" Oct 08 13:21:20 crc kubenswrapper[5065]: E1008 13:21:20.693718 5065 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 08 13:21:20 crc kubenswrapper[5065]: E1008 13:21:20.693803 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8d4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-j4zrr_openshift-marketplace(ab020e6c-4d15-48b9-a6c6-a23f10a35641): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 08 13:21:20 crc kubenswrapper[5065]: E1008 13:21:20.694975 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-j4zrr" podUID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" Oct 08 13:21:21 crc kubenswrapper[5065]: E1008 13:21:21.826504 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-j4zrr" podUID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" Oct 08 13:21:21 crc kubenswrapper[5065]: E1008 13:21:21.826589 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-rmdbk" podUID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" Oct 08 13:21:21 crc kubenswrapper[5065]: E1008 13:21:21.945381 5065 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 08 13:21:21 crc kubenswrapper[5065]: E1008 13:21:21.946372 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5thl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t4zh8_openshift-marketplace(b9e6f6b0-a470-4b38-9777-75f994c93fee): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 08 13:21:21 crc kubenswrapper[5065]: E1008 13:21:21.947519 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-t4zh8" podUID="b9e6f6b0-a470-4b38-9777-75f994c93fee" Oct 08 13:21:22 crc kubenswrapper[5065]: I1008 13:21:22.020482 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6nwh2"] Oct 08 13:21:22 crc kubenswrapper[5065]: W1008 13:21:22.035577 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8a38e7c_bbc4_4255_ab4e_a056eb0655be.slice/crio-0fa9344090d4a3c9a16cd80d269dd97640760be3c9f4d7f794cf40c3cce4625c WatchSource:0}: Error finding container 0fa9344090d4a3c9a16cd80d269dd97640760be3c9f4d7f794cf40c3cce4625c: Status 404 returned error can't find the container with id 0fa9344090d4a3c9a16cd80d269dd97640760be3c9f4d7f794cf40c3cce4625c Oct 08 13:21:22 crc kubenswrapper[5065]: E1008 13:21:22.493739 5065 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 08 13:21:22 crc 
kubenswrapper[5065]: E1008 13:21:22.494214 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvjhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-9h4zl_openshift-marketplace(f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 08 13:21:22 crc kubenswrapper[5065]: E1008 13:21:22.496055 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9h4zl" podUID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" Oct 08 13:21:22 crc kubenswrapper[5065]: I1008 13:21:22.565374 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" event={"ID":"c8a38e7c-bbc4-4255-ab4e-a056eb0655be","Type":"ContainerStarted","Data":"3233801cff5198494c4d27663b19ca6ada86dc7ac0b4ca42171b7953cf9ff6f5"} Oct 08 13:21:22 crc kubenswrapper[5065]: I1008 13:21:22.565443 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" event={"ID":"c8a38e7c-bbc4-4255-ab4e-a056eb0655be","Type":"ContainerStarted","Data":"0fa9344090d4a3c9a16cd80d269dd97640760be3c9f4d7f794cf40c3cce4625c"} Oct 08 13:21:22 crc kubenswrapper[5065]: I1008 13:21:22.568001 5065 generic.go:334] "Generic (PLEG): container finished" podID="f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" containerID="e84a3d2e16ac7b40a38c9ddfdb4625e31734f5265da19ab56dd029785011cb96" exitCode=0 Oct 08 13:21:22 crc kubenswrapper[5065]: I1008 13:21:22.568075 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b8kbv" event={"ID":"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b","Type":"ContainerDied","Data":"e84a3d2e16ac7b40a38c9ddfdb4625e31734f5265da19ab56dd029785011cb96"} Oct 08 13:21:22 crc 
kubenswrapper[5065]: E1008 13:21:22.594758 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-t4zh8" podUID="b9e6f6b0-a470-4b38-9777-75f994c93fee" Oct 08 13:21:22 crc kubenswrapper[5065]: E1008 13:21:22.602643 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9h4zl" podUID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" Oct 08 13:21:22 crc kubenswrapper[5065]: E1008 13:21:22.649733 5065 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 08 13:21:22 crc kubenswrapper[5065]: E1008 13:21:22.649921 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pgn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gqzmt_openshift-marketplace(9769ef8d-73d1-49d6-a138-efc820a036e7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 08 13:21:22 crc kubenswrapper[5065]: E1008 13:21:22.651939 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gqzmt" podUID="9769ef8d-73d1-49d6-a138-efc820a036e7" Oct 08 13:21:23 crc kubenswrapper[5065]: I1008 13:21:23.574748 5065 generic.go:334] "Generic (PLEG): container finished" 
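The marketplace catalog pods all fail the same way in this window: the CRI PullImage call for their index images is canceled mid-copy ("copying system image from manifest list: copying config: context canceled"), kuberuntime_manager.go:1274 dumps the full spec of the extract-content init container with ErrImagePull, and subsequent sync attempts surface ImagePullBackOff while the backoff timer runs. On the API side this state is visible in the pod's initContainerStatuses; a hedged detection sketch:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // stuckPulls lists init containers waiting on an image-pull error, the
    // cycle the extract-content containers above are in: ErrImagePull on a
    // fresh attempt, ImagePullBackOff between retries.
    func stuckPulls(pod *corev1.Pod) []string {
    	var out []string
    	for _, st := range pod.Status.InitContainerStatuses {
    		if w := st.State.Waiting; w != nil &&
    			(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
    			out = append(out, fmt.Sprintf("%s: %s (%s)", st.Name, w.Reason, w.Message))
    		}
    	}
    	return out
    }

    func main() {
    	var pod corev1.Pod // fetched via client-go in practice
    	for _, s := range stuckPulls(&pod) {
    		fmt.Println(s)
    	}
    }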
podID="61d3554b-9ede-49cf-a968-6a42c1620a64" containerID="3aa0cad9f5feacf8fe99db8e82e6ae6e0a07a6ad669b084cf9bb8021e5571dbd" exitCode=0 Oct 08 13:21:23 crc kubenswrapper[5065]: I1008 13:21:23.574867 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6mc5" event={"ID":"61d3554b-9ede-49cf-a968-6a42c1620a64","Type":"ContainerDied","Data":"3aa0cad9f5feacf8fe99db8e82e6ae6e0a07a6ad669b084cf9bb8021e5571dbd"} Oct 08 13:21:23 crc kubenswrapper[5065]: I1008 13:21:23.578603 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6nwh2" event={"ID":"c8a38e7c-bbc4-4255-ab4e-a056eb0655be","Type":"ContainerStarted","Data":"12545cb4727e89a04eaa60bd5d0e0f2b5283263468d2e309b28e74798ea531ca"} Oct 08 13:21:23 crc kubenswrapper[5065]: E1008 13:21:23.618458 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gqzmt" podUID="9769ef8d-73d1-49d6-a138-efc820a036e7" Oct 08 13:21:23 crc kubenswrapper[5065]: I1008 13:21:23.623074 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-6nwh2" podStartSLOduration=166.623053599 podStartE2EDuration="2m46.623053599s" podCreationTimestamp="2025-10-08 13:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:21:23.622777371 +0000 UTC m=+185.400159148" watchObservedRunningTime="2025-10-08 13:21:23.623053599 +0000 UTC m=+185.400435356" Oct 08 13:21:24 crc kubenswrapper[5065]: I1008 13:21:24.376034 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:21:24 crc kubenswrapper[5065]: I1008 13:21:24.376132 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:21:24 crc kubenswrapper[5065]: I1008 13:21:24.587959 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b8kbv" event={"ID":"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b","Type":"ContainerStarted","Data":"5b31364d5449a2b83a64d56856d513325b6bd460faea181476bd84796f1392b3"} Oct 08 13:21:24 crc kubenswrapper[5065]: I1008 13:21:24.614770 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b8kbv" podStartSLOduration=3.235509919 podStartE2EDuration="37.614749612s" podCreationTimestamp="2025-10-08 13:20:47 +0000 UTC" firstStartedPulling="2025-10-08 13:20:49.242129739 +0000 UTC m=+151.019511496" lastFinishedPulling="2025-10-08 13:21:23.621369432 +0000 UTC m=+185.398751189" observedRunningTime="2025-10-08 13:21:24.610628487 +0000 UTC m=+186.388010254" watchObservedRunningTime="2025-10-08 13:21:24.614749612 +0000 UTC m=+186.392131379" Oct 08 13:21:25 crc kubenswrapper[5065]: I1008 13:21:25.595956 5065 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-x6mc5" event={"ID":"61d3554b-9ede-49cf-a968-6a42c1620a64","Type":"ContainerStarted","Data":"785aa4512827f8eda2a54384610ccfffd5d242098358e2b4f78a5f3cad76d0af"} Oct 08 13:21:25 crc kubenswrapper[5065]: I1008 13:21:25.619086 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x6mc5" podStartSLOduration=3.255719795 podStartE2EDuration="37.619063135s" podCreationTimestamp="2025-10-08 13:20:48 +0000 UTC" firstStartedPulling="2025-10-08 13:20:50.283616591 +0000 UTC m=+152.060998358" lastFinishedPulling="2025-10-08 13:21:24.646959941 +0000 UTC m=+186.424341698" observedRunningTime="2025-10-08 13:21:25.618682995 +0000 UTC m=+187.396064762" watchObservedRunningTime="2025-10-08 13:21:25.619063135 +0000 UTC m=+187.396444902" Oct 08 13:21:26 crc kubenswrapper[5065]: I1008 13:21:26.109033 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 13:21:28 crc kubenswrapper[5065]: I1008 13:21:28.015747 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b8kbv" Oct 08 13:21:28 crc kubenswrapper[5065]: I1008 13:21:28.016906 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b8kbv" Oct 08 13:21:28 crc kubenswrapper[5065]: I1008 13:21:28.146531 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b8kbv" Oct 08 13:21:28 crc kubenswrapper[5065]: I1008 13:21:28.651034 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b8kbv" Oct 08 13:21:29 crc kubenswrapper[5065]: I1008 13:21:29.013509 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x6mc5" Oct 08 13:21:29 crc kubenswrapper[5065]: I1008 13:21:29.013856 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x6mc5" Oct 08 13:21:29 crc kubenswrapper[5065]: I1008 13:21:29.786487 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b8kbv"] Oct 08 13:21:30 crc kubenswrapper[5065]: I1008 13:21:30.055314 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x6mc5" podUID="61d3554b-9ede-49cf-a968-6a42c1620a64" containerName="registry-server" probeResult="failure" output=< Oct 08 13:21:30 crc kubenswrapper[5065]: timeout: failed to connect service ":50051" within 1s Oct 08 13:21:30 crc kubenswrapper[5065]: > Oct 08 13:21:31 crc kubenswrapper[5065]: I1008 13:21:31.624585 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b8kbv" podUID="f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" containerName="registry-server" containerID="cri-o://5b31364d5449a2b83a64d56856d513325b6bd460faea181476bd84796f1392b3" gracePeriod=2 Oct 08 13:21:31 crc kubenswrapper[5065]: I1008 13:21:31.956605 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b8kbv" Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.016972 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68jn5\" (UniqueName: \"kubernetes.io/projected/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-kube-api-access-68jn5\") pod \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\" (UID: \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\") " Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.017019 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-catalog-content\") pod \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\" (UID: \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\") " Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.017157 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-utilities\") pod \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\" (UID: \"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b\") " Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.017897 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-utilities" (OuterVolumeSpecName: "utilities") pod "f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" (UID: "f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.018625 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.023554 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-kube-api-access-68jn5" (OuterVolumeSpecName: "kube-api-access-68jn5") pod "f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" (UID: "f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b"). InnerVolumeSpecName "kube-api-access-68jn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.030959 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" (UID: "f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b"). InnerVolumeSpecName "catalog-content". 
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.120014 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68jn5\" (UniqueName: \"kubernetes.io/projected/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-kube-api-access-68jn5\") on node \"crc\" DevicePath \"\""
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.120046 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.632401 5065 generic.go:334] "Generic (PLEG): container finished" podID="f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" containerID="5b31364d5449a2b83a64d56856d513325b6bd460faea181476bd84796f1392b3" exitCode=0
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.632469 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b8kbv" event={"ID":"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b","Type":"ContainerDied","Data":"5b31364d5449a2b83a64d56856d513325b6bd460faea181476bd84796f1392b3"}
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.632502 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b8kbv" event={"ID":"f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b","Type":"ContainerDied","Data":"5c4a641d749d8f4212b9f8a5267521b1f27b9756feb490ce6b3367ae05a3b9c9"}
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.632528 5065 scope.go:117] "RemoveContainer" containerID="5b31364d5449a2b83a64d56856d513325b6bd460faea181476bd84796f1392b3"
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.632523 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b8kbv"
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.668382 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b8kbv"]
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.670573 5065 scope.go:117] "RemoveContainer" containerID="e84a3d2e16ac7b40a38c9ddfdb4625e31734f5265da19ab56dd029785011cb96"
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.672169 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b8kbv"]
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.698796 5065 scope.go:117] "RemoveContainer" containerID="d5cc4319909b14dd296f4f566065f7d7b01af7ceeeb962e6e1f2db284039a1b7"
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.721218 5065 scope.go:117] "RemoveContainer" containerID="5b31364d5449a2b83a64d56856d513325b6bd460faea181476bd84796f1392b3"
Oct 08 13:21:32 crc kubenswrapper[5065]: E1008 13:21:32.721738 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b31364d5449a2b83a64d56856d513325b6bd460faea181476bd84796f1392b3\": container with ID starting with 5b31364d5449a2b83a64d56856d513325b6bd460faea181476bd84796f1392b3 not found: ID does not exist" containerID="5b31364d5449a2b83a64d56856d513325b6bd460faea181476bd84796f1392b3"
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.721774 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b31364d5449a2b83a64d56856d513325b6bd460faea181476bd84796f1392b3"} err="failed to get container status \"5b31364d5449a2b83a64d56856d513325b6bd460faea181476bd84796f1392b3\": rpc error: code = NotFound desc = could not find container \"5b31364d5449a2b83a64d56856d513325b6bd460faea181476bd84796f1392b3\": container with ID starting with 5b31364d5449a2b83a64d56856d513325b6bd460faea181476bd84796f1392b3 not found: ID does not exist"
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.721817 5065 scope.go:117] "RemoveContainer" containerID="e84a3d2e16ac7b40a38c9ddfdb4625e31734f5265da19ab56dd029785011cb96"
Oct 08 13:21:32 crc kubenswrapper[5065]: E1008 13:21:32.722173 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e84a3d2e16ac7b40a38c9ddfdb4625e31734f5265da19ab56dd029785011cb96\": container with ID starting with e84a3d2e16ac7b40a38c9ddfdb4625e31734f5265da19ab56dd029785011cb96 not found: ID does not exist" containerID="e84a3d2e16ac7b40a38c9ddfdb4625e31734f5265da19ab56dd029785011cb96"
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.722199 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e84a3d2e16ac7b40a38c9ddfdb4625e31734f5265da19ab56dd029785011cb96"} err="failed to get container status \"e84a3d2e16ac7b40a38c9ddfdb4625e31734f5265da19ab56dd029785011cb96\": rpc error: code = NotFound desc = could not find container \"e84a3d2e16ac7b40a38c9ddfdb4625e31734f5265da19ab56dd029785011cb96\": container with ID starting with e84a3d2e16ac7b40a38c9ddfdb4625e31734f5265da19ab56dd029785011cb96 not found: ID does not exist"
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.722211 5065 scope.go:117] "RemoveContainer" containerID="d5cc4319909b14dd296f4f566065f7d7b01af7ceeeb962e6e1f2db284039a1b7"
Oct 08 13:21:32 crc kubenswrapper[5065]: E1008 13:21:32.722542 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5cc4319909b14dd296f4f566065f7d7b01af7ceeeb962e6e1f2db284039a1b7\": container with ID starting with d5cc4319909b14dd296f4f566065f7d7b01af7ceeeb962e6e1f2db284039a1b7 not found: ID does not exist" containerID="d5cc4319909b14dd296f4f566065f7d7b01af7ceeeb962e6e1f2db284039a1b7"
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.722570 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5cc4319909b14dd296f4f566065f7d7b01af7ceeeb962e6e1f2db284039a1b7"} err="failed to get container status \"d5cc4319909b14dd296f4f566065f7d7b01af7ceeeb962e6e1f2db284039a1b7\": rpc error: code = NotFound desc = could not find container \"d5cc4319909b14dd296f4f566065f7d7b01af7ceeeb962e6e1f2db284039a1b7\": container with ID starting with d5cc4319909b14dd296f4f566065f7d7b01af7ceeeb962e6e1f2db284039a1b7 not found: ID does not exist"
Oct 08 13:21:32 crc kubenswrapper[5065]: I1008 13:21:32.883692 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" path="/var/lib/kubelet/pods/f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b/volumes"
Oct 08 13:21:33 crc kubenswrapper[5065]: I1008 13:21:33.640194 5065 generic.go:334] "Generic (PLEG): container finished" podID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" containerID="ba58d3081e51f06d94c046011406ab9b2c124b6b1cb5f20ef9466ab83a90255a" exitCode=0
Oct 08 13:21:33 crc kubenswrapper[5065]: I1008 13:21:33.640267 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8djr2" event={"ID":"522e90a2-47b0-4a82-9cac-9665a4e2dadc","Type":"ContainerDied","Data":"ba58d3081e51f06d94c046011406ab9b2c124b6b1cb5f20ef9466ab83a90255a"}
Oct 08 13:21:35 crc kubenswrapper[5065]: I1008 13:21:35.658632 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8djr2" event={"ID":"522e90a2-47b0-4a82-9cac-9665a4e2dadc","Type":"ContainerStarted","Data":"3d318c08f5b895af8cdc11a62c894b7e3c6078c197caef49a31ef237bc21a342"}
Oct 08 13:21:35 crc kubenswrapper[5065]: I1008 13:21:35.674859 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8djr2"
Oct 08 13:21:35 crc kubenswrapper[5065]: I1008 13:21:35.674989 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8djr2"
Oct 08 13:21:35 crc kubenswrapper[5065]: I1008 13:21:35.678083 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8djr2" podStartSLOduration=3.399331605 podStartE2EDuration="50.678036023s" podCreationTimestamp="2025-10-08 13:20:45 +0000 UTC" firstStartedPulling="2025-10-08 13:20:48.191463682 +0000 UTC m=+149.968845439" lastFinishedPulling="2025-10-08 13:21:35.47016809 +0000 UTC m=+197.247549857" observedRunningTime="2025-10-08 13:21:35.675623254 +0000 UTC m=+197.453005021" watchObservedRunningTime="2025-10-08 13:21:35.678036023 +0000 UTC m=+197.455417780"
Oct 08 13:21:36 crc kubenswrapper[5065]: I1008 13:21:36.666835 5065 generic.go:334] "Generic (PLEG): container finished" podID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" containerID="51cdd4967113bc66eef47809972bb7e0d3d05d49174738a5688abaff29c609ed" exitCode=0
Oct 08 13:21:36 crc kubenswrapper[5065]: I1008 13:21:36.666914 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4zrr" event={"ID":"ab020e6c-4d15-48b9-a6c6-a23f10a35641","Type":"ContainerDied","Data":"51cdd4967113bc66eef47809972bb7e0d3d05d49174738a5688abaff29c609ed"}
Oct 08 13:21:36 crc kubenswrapper[5065]: I1008 13:21:36.710061 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8djr2" podUID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" containerName="registry-server" probeResult="failure" output=<
Oct 08 13:21:36 crc kubenswrapper[5065]: timeout: failed to connect service ":50051" within 1s
Oct 08 13:21:36 crc kubenswrapper[5065]: >
Oct 08 13:21:37 crc kubenswrapper[5065]: I1008 13:21:37.675631 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmdbk" event={"ID":"c0ab2cf0-b994-4c5e-a074-fe9c56d34171","Type":"ContainerStarted","Data":"8608a7e56aba6ce1ad0cd29e7d37c4e8bfc749009b7f48e74c0d39d761a0e066"}
Oct 08 13:21:37 crc kubenswrapper[5065]: I1008 13:21:37.678917 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4zrr" event={"ID":"ab020e6c-4d15-48b9-a6c6-a23f10a35641","Type":"ContainerStarted","Data":"09d63fb0775444ed8e1a2e81ef291f5fbdbd150a5d7f0f1ec0df2be9ad1a0161"}
Oct 08 13:21:37 crc kubenswrapper[5065]: I1008 13:21:37.681623 5065 generic.go:334] "Generic (PLEG): container finished" podID="b9e6f6b0-a470-4b38-9777-75f994c93fee" containerID="06d9e66c4fbe4f7fa8324332b271b63d727308c0c602b92538c30deacfe1a0a0" exitCode=0
Oct 08 13:21:37 crc kubenswrapper[5065]: I1008 13:21:37.681668 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4zh8" event={"ID":"b9e6f6b0-a470-4b38-9777-75f994c93fee","Type":"ContainerDied","Data":"06d9e66c4fbe4f7fa8324332b271b63d727308c0c602b92538c30deacfe1a0a0"}
Oct 08 13:21:37 crc kubenswrapper[5065]: I1008 13:21:37.718822 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j4zrr" podStartSLOduration=3.487426715 podStartE2EDuration="52.718795764s" podCreationTimestamp="2025-10-08 13:20:45 +0000 UTC" firstStartedPulling="2025-10-08 13:20:48.209519086 +0000 UTC m=+149.986900843" lastFinishedPulling="2025-10-08 13:21:37.440888135 +0000 UTC m=+199.218269892" observedRunningTime="2025-10-08 13:21:37.715095398 +0000 UTC m=+199.492477165" watchObservedRunningTime="2025-10-08 13:21:37.718795764 +0000 UTC m=+199.496177521"
Oct 08 13:21:38 crc kubenswrapper[5065]: I1008 13:21:38.689160 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4zh8" event={"ID":"b9e6f6b0-a470-4b38-9777-75f994c93fee","Type":"ContainerStarted","Data":"70ad73d301a9129ca6dabad8d4c0e1664a87b46f4098f76cba14b128a41b810f"}
Oct 08 13:21:38 crc kubenswrapper[5065]: I1008 13:21:38.691571 5065 generic.go:334] "Generic (PLEG): container finished" podID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" containerID="8608a7e56aba6ce1ad0cd29e7d37c4e8bfc749009b7f48e74c0d39d761a0e066" exitCode=0
Oct 08 13:21:38 crc kubenswrapper[5065]: I1008 13:21:38.691623 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmdbk" event={"ID":"c0ab2cf0-b994-4c5e-a074-fe9c56d34171","Type":"ContainerDied","Data":"8608a7e56aba6ce1ad0cd29e7d37c4e8bfc749009b7f48e74c0d39d761a0e066"}
Oct 08 13:21:38 crc kubenswrapper[5065]: I1008 13:21:38.709245 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t4zh8" podStartSLOduration=2.570070501 podStartE2EDuration="53.709229626s" podCreationTimestamp="2025-10-08 13:20:45 +0000 UTC" firstStartedPulling="2025-10-08 13:20:47.078943196 +0000 UTC m=+148.856324943" lastFinishedPulling="2025-10-08 13:21:38.218102301 +0000 UTC m=+199.995484068" observedRunningTime="2025-10-08 13:21:38.708255108 +0000 UTC m=+200.485636865" watchObservedRunningTime="2025-10-08 13:21:38.709229626 +0000 UTC m=+200.486611383"
Oct 08 13:21:39 crc kubenswrapper[5065]: I1008 13:21:39.082934 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:21:39 crc kubenswrapper[5065]: I1008 13:21:39.132026 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:21:42 crc kubenswrapper[5065]: I1008 13:21:42.386029 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x6mc5"]
Oct 08 13:21:42 crc kubenswrapper[5065]: I1008 13:21:42.386876 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x6mc5" podUID="61d3554b-9ede-49cf-a968-6a42c1620a64" containerName="registry-server" containerID="cri-o://785aa4512827f8eda2a54384610ccfffd5d242098358e2b4f78a5f3cad76d0af" gracePeriod=2
Oct 08 13:21:44 crc kubenswrapper[5065]: I1008 13:21:44.723407 5065 generic.go:334] "Generic (PLEG): container finished" podID="61d3554b-9ede-49cf-a968-6a42c1620a64" containerID="785aa4512827f8eda2a54384610ccfffd5d242098358e2b4f78a5f3cad76d0af" exitCode=0
Oct 08 13:21:44 crc kubenswrapper[5065]: I1008 13:21:44.723698 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6mc5" event={"ID":"61d3554b-9ede-49cf-a968-6a42c1620a64","Type":"ContainerDied","Data":"785aa4512827f8eda2a54384610ccfffd5d242098358e2b4f78a5f3cad76d0af"}
Oct 08 13:21:44 crc kubenswrapper[5065]: I1008 13:21:44.767300 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:21:44 crc kubenswrapper[5065]: I1008 13:21:44.900586 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrgxj\" (UniqueName: \"kubernetes.io/projected/61d3554b-9ede-49cf-a968-6a42c1620a64-kube-api-access-wrgxj\") pod \"61d3554b-9ede-49cf-a968-6a42c1620a64\" (UID: \"61d3554b-9ede-49cf-a968-6a42c1620a64\") "
Oct 08 13:21:44 crc kubenswrapper[5065]: I1008 13:21:44.900696 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61d3554b-9ede-49cf-a968-6a42c1620a64-catalog-content\") pod \"61d3554b-9ede-49cf-a968-6a42c1620a64\" (UID: \"61d3554b-9ede-49cf-a968-6a42c1620a64\") "
Oct 08 13:21:44 crc kubenswrapper[5065]: I1008 13:21:44.900724 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61d3554b-9ede-49cf-a968-6a42c1620a64-utilities\") pod \"61d3554b-9ede-49cf-a968-6a42c1620a64\" (UID: \"61d3554b-9ede-49cf-a968-6a42c1620a64\") "
Oct 08 13:21:44 crc kubenswrapper[5065]: I1008 13:21:44.902511 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61d3554b-9ede-49cf-a968-6a42c1620a64-utilities" (OuterVolumeSpecName: "utilities") pod "61d3554b-9ede-49cf-a968-6a42c1620a64" (UID: "61d3554b-9ede-49cf-a968-6a42c1620a64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:21:44 crc kubenswrapper[5065]: I1008 13:21:44.907574 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d3554b-9ede-49cf-a968-6a42c1620a64-kube-api-access-wrgxj" (OuterVolumeSpecName: "kube-api-access-wrgxj") pod "61d3554b-9ede-49cf-a968-6a42c1620a64" (UID: "61d3554b-9ede-49cf-a968-6a42c1620a64"). InnerVolumeSpecName "kube-api-access-wrgxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:21:44 crc kubenswrapper[5065]: I1008 13:21:44.989101 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61d3554b-9ede-49cf-a968-6a42c1620a64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61d3554b-9ede-49cf-a968-6a42c1620a64" (UID: "61d3554b-9ede-49cf-a968-6a42c1620a64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.002243 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrgxj\" (UniqueName: \"kubernetes.io/projected/61d3554b-9ede-49cf-a968-6a42c1620a64-kube-api-access-wrgxj\") on node \"crc\" DevicePath \"\""
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.002511 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61d3554b-9ede-49cf-a968-6a42c1620a64-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.002626 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61d3554b-9ede-49cf-a968-6a42c1620a64-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.447101 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t4zh8"
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.448059 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t4zh8"
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.491761 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t4zh8"
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.733174 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8djr2"
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.740015 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmdbk" event={"ID":"c0ab2cf0-b994-4c5e-a074-fe9c56d34171","Type":"ContainerStarted","Data":"ac5f7f3a0cb05e0d0abf4ba3e0632a06bc30818b0dd7b6308e7e4ca0459b3d3b"}
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.742106 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqzmt" event={"ID":"9769ef8d-73d1-49d6-a138-efc820a036e7","Type":"ContainerStarted","Data":"6ec9720446aabdaba81f26e012011d06ce473c67b5d4a0e1362db2356d01b268"}
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.744548 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h4zl" event={"ID":"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0","Type":"ContainerStarted","Data":"c1feb02554472020df8832836bc690062c516094525706cab38e993bbd53c270"}
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.760216 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6mc5"
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.761884 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6mc5" event={"ID":"61d3554b-9ede-49cf-a968-6a42c1620a64","Type":"ContainerDied","Data":"dd6a08e077ae278306edbe1b968e5120d38b0b3913f038e2712b07bce2b62c19"}
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.761949 5065 scope.go:117] "RemoveContainer" containerID="785aa4512827f8eda2a54384610ccfffd5d242098358e2b4f78a5f3cad76d0af"
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.776697 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8djr2"
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.790578 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rmdbk" podStartSLOduration=2.557974401 podStartE2EDuration="57.790560004s" podCreationTimestamp="2025-10-08 13:20:48 +0000 UTC" firstStartedPulling="2025-10-08 13:20:50.297422407 +0000 UTC m=+152.074804164" lastFinishedPulling="2025-10-08 13:21:45.53000801 +0000 UTC m=+207.307389767" observedRunningTime="2025-10-08 13:21:45.790473962 +0000 UTC m=+207.567855719" watchObservedRunningTime="2025-10-08 13:21:45.790560004 +0000 UTC m=+207.567941761"
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.818369 5065 scope.go:117] "RemoveContainer" containerID="3aa0cad9f5feacf8fe99db8e82e6ae6e0a07a6ad669b084cf9bb8021e5571dbd"
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.823378 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t4zh8"
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.832468 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x6mc5"]
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.835548 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x6mc5"]
Oct 08 13:21:45 crc kubenswrapper[5065]: I1008 13:21:45.849630 5065 scope.go:117] "RemoveContainer" containerID="9510fb5abb76be8243b2a88f4a39888776c365ed9a7fbf2015ca26d3281277de"
Oct 08 13:21:46 crc kubenswrapper[5065]: I1008 13:21:46.047664 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j4zrr"
Oct 08 13:21:46 crc kubenswrapper[5065]: I1008 13:21:46.047903 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j4zrr"
Oct 08 13:21:46 crc kubenswrapper[5065]: I1008 13:21:46.093436 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j4zrr"
Oct 08 13:21:46 crc kubenswrapper[5065]: I1008 13:21:46.767642 5065 generic.go:334] "Generic (PLEG): container finished" podID="9769ef8d-73d1-49d6-a138-efc820a036e7" containerID="6ec9720446aabdaba81f26e012011d06ce473c67b5d4a0e1362db2356d01b268" exitCode=0
Oct 08 13:21:46 crc kubenswrapper[5065]: I1008 13:21:46.767739 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqzmt" event={"ID":"9769ef8d-73d1-49d6-a138-efc820a036e7","Type":"ContainerDied","Data":"6ec9720446aabdaba81f26e012011d06ce473c67b5d4a0e1362db2356d01b268"}
Oct 08 13:21:46 crc kubenswrapper[5065]: I1008 13:21:46.771199 5065 generic.go:334] "Generic (PLEG): container finished" podID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" containerID="c1feb02554472020df8832836bc690062c516094525706cab38e993bbd53c270" exitCode=0
Oct 08 13:21:46 crc kubenswrapper[5065]: I1008 13:21:46.771351 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h4zl" event={"ID":"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0","Type":"ContainerDied","Data":"c1feb02554472020df8832836bc690062c516094525706cab38e993bbd53c270"}
Oct 08 13:21:46 crc kubenswrapper[5065]: I1008 13:21:46.822670 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j4zrr"
Oct 08 13:21:46 crc kubenswrapper[5065]: I1008 13:21:46.880741 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61d3554b-9ede-49cf-a968-6a42c1620a64" path="/var/lib/kubelet/pods/61d3554b-9ede-49cf-a968-6a42c1620a64/volumes"
Oct 08 13:21:47 crc kubenswrapper[5065]: I1008 13:21:47.779689 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqzmt" event={"ID":"9769ef8d-73d1-49d6-a138-efc820a036e7","Type":"ContainerStarted","Data":"e1d66d54514918eb9bb20ebe19dc20be6241913ea50d1a4cce2dd7f2da5a7ee4"}
Oct 08 13:21:47 crc kubenswrapper[5065]: I1008 13:21:47.782466 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h4zl" event={"ID":"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0","Type":"ContainerStarted","Data":"8bf99f2ca2d72c60cc33298baf6d112da030935dc2e63aad6bbe90ea8e1f2df7"}
Oct 08 13:21:47 crc kubenswrapper[5065]: I1008 13:21:47.819229 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9h4zl" podStartSLOduration=2.526513428 podStartE2EDuration="1m2.819213339s" podCreationTimestamp="2025-10-08 13:20:45 +0000 UTC" firstStartedPulling="2025-10-08 13:20:47.116407162 +0000 UTC m=+148.893788919" lastFinishedPulling="2025-10-08 13:21:47.409107073 +0000 UTC m=+209.186488830" observedRunningTime="2025-10-08 13:21:47.816199333 +0000 UTC m=+209.593581090" watchObservedRunningTime="2025-10-08 13:21:47.819213339 +0000 UTC m=+209.596595096"
Oct 08 13:21:47 crc kubenswrapper[5065]: I1008 13:21:47.819376 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gqzmt" podStartSLOduration=2.7200875350000002 podStartE2EDuration="1m0.819372273s" podCreationTimestamp="2025-10-08 13:20:47 +0000 UTC" firstStartedPulling="2025-10-08 13:20:49.264950876 +0000 UTC m=+151.042332633" lastFinishedPulling="2025-10-08 13:21:47.364235614 +0000 UTC m=+209.141617371" observedRunningTime="2025-10-08 13:21:47.801385941 +0000 UTC m=+209.578767698" watchObservedRunningTime="2025-10-08 13:21:47.819372273 +0000 UTC m=+209.596754030"
Oct 08 13:21:47 crc kubenswrapper[5065]: I1008 13:21:47.985205 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j4zrr"]
Oct 08 13:21:48 crc kubenswrapper[5065]: I1008 13:21:48.636934 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rmdbk"
Oct 08 13:21:48 crc kubenswrapper[5065]: I1008 13:21:48.636996 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rmdbk"
Oct 08 13:21:49 crc kubenswrapper[5065]: I1008 13:21:49.486402 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8gdt7"]
Oct 08 13:21:49 crc kubenswrapper[5065]: I1008 13:21:49.677458 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rmdbk" podUID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" containerName="registry-server" probeResult="failure" output=<
Oct 08 13:21:49 crc kubenswrapper[5065]: timeout: failed to connect service ":50051" within 1s
Oct 08 13:21:49 crc kubenswrapper[5065]: >
Oct 08 13:21:49 crc kubenswrapper[5065]: I1008 13:21:49.791804 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j4zrr" podUID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" containerName="registry-server" containerID="cri-o://09d63fb0775444ed8e1a2e81ef291f5fbdbd150a5d7f0f1ec0df2be9ad1a0161" gracePeriod=2
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.227016 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j4zrr"
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.374175 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab020e6c-4d15-48b9-a6c6-a23f10a35641-catalog-content\") pod \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\" (UID: \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\") "
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.374535 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab020e6c-4d15-48b9-a6c6-a23f10a35641-utilities\") pod \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\" (UID: \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\") "
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.374619 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8d4c\" (UniqueName: \"kubernetes.io/projected/ab020e6c-4d15-48b9-a6c6-a23f10a35641-kube-api-access-d8d4c\") pod \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\" (UID: \"ab020e6c-4d15-48b9-a6c6-a23f10a35641\") "
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.375607 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab020e6c-4d15-48b9-a6c6-a23f10a35641-utilities" (OuterVolumeSpecName: "utilities") pod "ab020e6c-4d15-48b9-a6c6-a23f10a35641" (UID: "ab020e6c-4d15-48b9-a6c6-a23f10a35641"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.380263 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab020e6c-4d15-48b9-a6c6-a23f10a35641-kube-api-access-d8d4c" (OuterVolumeSpecName: "kube-api-access-d8d4c") pod "ab020e6c-4d15-48b9-a6c6-a23f10a35641" (UID: "ab020e6c-4d15-48b9-a6c6-a23f10a35641"). InnerVolumeSpecName "kube-api-access-d8d4c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.423335 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab020e6c-4d15-48b9-a6c6-a23f10a35641-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab020e6c-4d15-48b9-a6c6-a23f10a35641" (UID: "ab020e6c-4d15-48b9-a6c6-a23f10a35641"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.475526 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8d4c\" (UniqueName: \"kubernetes.io/projected/ab020e6c-4d15-48b9-a6c6-a23f10a35641-kube-api-access-d8d4c\") on node \"crc\" DevicePath \"\""
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.475554 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab020e6c-4d15-48b9-a6c6-a23f10a35641-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.475563 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab020e6c-4d15-48b9-a6c6-a23f10a35641-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.799313 5065 generic.go:334] "Generic (PLEG): container finished" podID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" containerID="09d63fb0775444ed8e1a2e81ef291f5fbdbd150a5d7f0f1ec0df2be9ad1a0161" exitCode=0
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.799384 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4zrr" event={"ID":"ab020e6c-4d15-48b9-a6c6-a23f10a35641","Type":"ContainerDied","Data":"09d63fb0775444ed8e1a2e81ef291f5fbdbd150a5d7f0f1ec0df2be9ad1a0161"}
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.799446 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4zrr" event={"ID":"ab020e6c-4d15-48b9-a6c6-a23f10a35641","Type":"ContainerDied","Data":"cc366aa1d50fea4bca3406d21126d6ec1c94989be8a0530497fb99adb73b34fc"}
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.799469 5065 scope.go:117] "RemoveContainer" containerID="09d63fb0775444ed8e1a2e81ef291f5fbdbd150a5d7f0f1ec0df2be9ad1a0161"
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.799465 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j4zrr"
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.816491 5065 scope.go:117] "RemoveContainer" containerID="51cdd4967113bc66eef47809972bb7e0d3d05d49174738a5688abaff29c609ed"
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.831245 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j4zrr"]
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.835252 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j4zrr"]
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.853321 5065 scope.go:117] "RemoveContainer" containerID="44919f81701cf15065591ec1729d4c067be3fbf52495fb497ad277399ade8bd0"
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.866250 5065 scope.go:117] "RemoveContainer" containerID="09d63fb0775444ed8e1a2e81ef291f5fbdbd150a5d7f0f1ec0df2be9ad1a0161"
Oct 08 13:21:50 crc kubenswrapper[5065]: E1008 13:21:50.866641 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09d63fb0775444ed8e1a2e81ef291f5fbdbd150a5d7f0f1ec0df2be9ad1a0161\": container with ID starting with 09d63fb0775444ed8e1a2e81ef291f5fbdbd150a5d7f0f1ec0df2be9ad1a0161 not found: ID does not exist" containerID="09d63fb0775444ed8e1a2e81ef291f5fbdbd150a5d7f0f1ec0df2be9ad1a0161"
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.866681 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09d63fb0775444ed8e1a2e81ef291f5fbdbd150a5d7f0f1ec0df2be9ad1a0161"} err="failed to get container status \"09d63fb0775444ed8e1a2e81ef291f5fbdbd150a5d7f0f1ec0df2be9ad1a0161\": rpc error: code = NotFound desc = could not find container \"09d63fb0775444ed8e1a2e81ef291f5fbdbd150a5d7f0f1ec0df2be9ad1a0161\": container with ID starting with 09d63fb0775444ed8e1a2e81ef291f5fbdbd150a5d7f0f1ec0df2be9ad1a0161 not found: ID does not exist"
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.866710 5065 scope.go:117] "RemoveContainer" containerID="51cdd4967113bc66eef47809972bb7e0d3d05d49174738a5688abaff29c609ed"
Oct 08 13:21:50 crc kubenswrapper[5065]: E1008 13:21:50.867065 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51cdd4967113bc66eef47809972bb7e0d3d05d49174738a5688abaff29c609ed\": container with ID starting with 51cdd4967113bc66eef47809972bb7e0d3d05d49174738a5688abaff29c609ed not found: ID does not exist" containerID="51cdd4967113bc66eef47809972bb7e0d3d05d49174738a5688abaff29c609ed"
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.867095 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51cdd4967113bc66eef47809972bb7e0d3d05d49174738a5688abaff29c609ed"} err="failed to get container status \"51cdd4967113bc66eef47809972bb7e0d3d05d49174738a5688abaff29c609ed\": rpc error: code = NotFound desc = could not find container \"51cdd4967113bc66eef47809972bb7e0d3d05d49174738a5688abaff29c609ed\": container with ID starting with 51cdd4967113bc66eef47809972bb7e0d3d05d49174738a5688abaff29c609ed not found: ID does not exist"
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.867115 5065 scope.go:117] "RemoveContainer" containerID="44919f81701cf15065591ec1729d4c067be3fbf52495fb497ad277399ade8bd0"
Oct 08 13:21:50 crc kubenswrapper[5065]: E1008 13:21:50.867584 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44919f81701cf15065591ec1729d4c067be3fbf52495fb497ad277399ade8bd0\": container with ID starting with 44919f81701cf15065591ec1729d4c067be3fbf52495fb497ad277399ade8bd0 not found: ID does not exist" containerID="44919f81701cf15065591ec1729d4c067be3fbf52495fb497ad277399ade8bd0"
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.867611 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44919f81701cf15065591ec1729d4c067be3fbf52495fb497ad277399ade8bd0"} err="failed to get container status \"44919f81701cf15065591ec1729d4c067be3fbf52495fb497ad277399ade8bd0\": rpc error: code = NotFound desc = could not find container \"44919f81701cf15065591ec1729d4c067be3fbf52495fb497ad277399ade8bd0\": container with ID starting with 44919f81701cf15065591ec1729d4c067be3fbf52495fb497ad277399ade8bd0 not found: ID does not exist"
Oct 08 13:21:50 crc kubenswrapper[5065]: I1008 13:21:50.879724 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" path="/var/lib/kubelet/pods/ab020e6c-4d15-48b9-a6c6-a23f10a35641/volumes"
Oct 08 13:21:54 crc kubenswrapper[5065]: I1008 13:21:54.375206 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 13:21:54 crc kubenswrapper[5065]: I1008 13:21:54.375565 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 13:21:54 crc kubenswrapper[5065]: I1008 13:21:54.375609 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj"
Oct 08 13:21:54 crc kubenswrapper[5065]: I1008 13:21:54.376222 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 13:21:54 crc kubenswrapper[5065]: I1008 13:21:54.376272 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c" gracePeriod=600
Oct 08 13:21:54 crc kubenswrapper[5065]: I1008 13:21:54.823222 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c" exitCode=0
Oct 08 13:21:54 crc kubenswrapper[5065]: I1008 13:21:54.823272 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c"}
Oct 08 13:21:54 crc kubenswrapper[5065]: I1008 13:21:54.823953 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"6b6ece119e94ac9da615f168a7039d14cba16573f0741f84acc41e64424ae388"}
Oct 08 13:21:55 crc kubenswrapper[5065]: I1008 13:21:55.890441 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9h4zl"
Oct 08 13:21:55 crc kubenswrapper[5065]: I1008 13:21:55.890872 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9h4zl"
Oct 08 13:21:55 crc kubenswrapper[5065]: I1008 13:21:55.928347 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9h4zl"
Oct 08 13:21:56 crc kubenswrapper[5065]: I1008 13:21:56.878447 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9h4zl"
Oct 08 13:21:57 crc kubenswrapper[5065]: I1008 13:21:57.730169 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gqzmt"
Oct 08 13:21:57 crc kubenswrapper[5065]: I1008 13:21:57.730323 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gqzmt"
Oct 08 13:21:57 crc kubenswrapper[5065]: I1008 13:21:57.778940 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gqzmt"
Oct 08 13:21:57 crc kubenswrapper[5065]: I1008 13:21:57.785577 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9h4zl"]
Oct 08 13:21:57 crc kubenswrapper[5065]: I1008 13:21:57.874971 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gqzmt"
Oct 08 13:21:58 crc kubenswrapper[5065]: I1008 13:21:58.694324 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rmdbk"
Oct 08 13:21:58 crc kubenswrapper[5065]: I1008 13:21:58.736614 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rmdbk"
Oct 08 13:21:58 crc kubenswrapper[5065]: I1008 13:21:58.842581 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9h4zl" podUID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" containerName="registry-server" containerID="cri-o://8bf99f2ca2d72c60cc33298baf6d112da030935dc2e63aad6bbe90ea8e1f2df7" gracePeriod=2
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.226478 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h4zl"
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.389768 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvjhd\" (UniqueName: \"kubernetes.io/projected/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-kube-api-access-gvjhd\") pod \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\" (UID: \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\") "
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.389858 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-catalog-content\") pod \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\" (UID: \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\") "
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.389945 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-utilities\") pod \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\" (UID: \"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0\") "
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.390986 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-utilities" (OuterVolumeSpecName: "utilities") pod "f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" (UID: "f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.391229 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.398394 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-kube-api-access-gvjhd" (OuterVolumeSpecName: "kube-api-access-gvjhd") pod "f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" (UID: "f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0"). InnerVolumeSpecName "kube-api-access-gvjhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.442370 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" (UID: "f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.492226 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvjhd\" (UniqueName: \"kubernetes.io/projected/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-kube-api-access-gvjhd\") on node \"crc\" DevicePath \"\""
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.492288 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.848264 5065 generic.go:334] "Generic (PLEG): container finished" podID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" containerID="8bf99f2ca2d72c60cc33298baf6d112da030935dc2e63aad6bbe90ea8e1f2df7" exitCode=0
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.848323 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h4zl"
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.848342 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h4zl" event={"ID":"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0","Type":"ContainerDied","Data":"8bf99f2ca2d72c60cc33298baf6d112da030935dc2e63aad6bbe90ea8e1f2df7"}
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.848922 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h4zl" event={"ID":"f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0","Type":"ContainerDied","Data":"2df21a005037dff8cc5023c48379ab5358d5f1b343df8d1e5aa14e050b2547c8"}
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.848941 5065 scope.go:117] "RemoveContainer" containerID="8bf99f2ca2d72c60cc33298baf6d112da030935dc2e63aad6bbe90ea8e1f2df7"
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.863909 5065 scope.go:117] "RemoveContainer" containerID="c1feb02554472020df8832836bc690062c516094525706cab38e993bbd53c270"
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.875461 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9h4zl"]
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.878834 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9h4zl"]
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.890449 5065 scope.go:117] "RemoveContainer" containerID="3c0644f651773120b932bf410bd201013b9c876eccea373b37deca1f4eeb2896"
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.904368 5065 scope.go:117] "RemoveContainer" containerID="8bf99f2ca2d72c60cc33298baf6d112da030935dc2e63aad6bbe90ea8e1f2df7"
Oct 08 13:21:59 crc kubenswrapper[5065]: E1008 13:21:59.904892 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bf99f2ca2d72c60cc33298baf6d112da030935dc2e63aad6bbe90ea8e1f2df7\": container with ID starting with 8bf99f2ca2d72c60cc33298baf6d112da030935dc2e63aad6bbe90ea8e1f2df7 not found: ID does not exist" containerID="8bf99f2ca2d72c60cc33298baf6d112da030935dc2e63aad6bbe90ea8e1f2df7"
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.904928 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bf99f2ca2d72c60cc33298baf6d112da030935dc2e63aad6bbe90ea8e1f2df7"} err="failed to get container status \"8bf99f2ca2d72c60cc33298baf6d112da030935dc2e63aad6bbe90ea8e1f2df7\": rpc error: code = NotFound desc = could not find container \"8bf99f2ca2d72c60cc33298baf6d112da030935dc2e63aad6bbe90ea8e1f2df7\": container with ID starting with 8bf99f2ca2d72c60cc33298baf6d112da030935dc2e63aad6bbe90ea8e1f2df7 not found: ID does not exist"
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.904949 5065 scope.go:117] "RemoveContainer" containerID="c1feb02554472020df8832836bc690062c516094525706cab38e993bbd53c270"
Oct 08 13:21:59 crc kubenswrapper[5065]: E1008 13:21:59.905333 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1feb02554472020df8832836bc690062c516094525706cab38e993bbd53c270\": container with ID starting with c1feb02554472020df8832836bc690062c516094525706cab38e993bbd53c270 not found: ID does not exist" containerID="c1feb02554472020df8832836bc690062c516094525706cab38e993bbd53c270"
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.905360 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1feb02554472020df8832836bc690062c516094525706cab38e993bbd53c270"} err="failed to get container status \"c1feb02554472020df8832836bc690062c516094525706cab38e993bbd53c270\": rpc error: code = NotFound desc = could not find container \"c1feb02554472020df8832836bc690062c516094525706cab38e993bbd53c270\": container with ID starting with c1feb02554472020df8832836bc690062c516094525706cab38e993bbd53c270 not found: ID does not exist"
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.905375 5065 scope.go:117] "RemoveContainer" containerID="3c0644f651773120b932bf410bd201013b9c876eccea373b37deca1f4eeb2896"
Oct 08 13:21:59 crc kubenswrapper[5065]: E1008 13:21:59.905710 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c0644f651773120b932bf410bd201013b9c876eccea373b37deca1f4eeb2896\": container with ID starting with 3c0644f651773120b932bf410bd201013b9c876eccea373b37deca1f4eeb2896 not found: ID does not exist" containerID="3c0644f651773120b932bf410bd201013b9c876eccea373b37deca1f4eeb2896"
Oct 08 13:21:59 crc kubenswrapper[5065]: I1008 13:21:59.905760 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0644f651773120b932bf410bd201013b9c876eccea373b37deca1f4eeb2896"} err="failed to get container status \"3c0644f651773120b932bf410bd201013b9c876eccea373b37deca1f4eeb2896\": rpc error: code = NotFound desc = could not find container \"3c0644f651773120b932bf410bd201013b9c876eccea373b37deca1f4eeb2896\": container with ID starting with 3c0644f651773120b932bf410bd201013b9c876eccea373b37deca1f4eeb2896 not found: ID does not exist"
Oct 08 13:22:00 crc kubenswrapper[5065]: I1008 13:22:00.880265 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" path="/var/lib/kubelet/pods/f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0/volumes"
Oct 08 13:22:14 crc kubenswrapper[5065]: I1008 13:22:14.529430 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" podUID="e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" containerName="oauth-openshift" containerID="cri-o://faeae51c893a92143f092f27a0e80a2a18995f279383180fcf24ad8e93dfddc6" gracePeriod=15
Oct 08 13:22:14 crc kubenswrapper[5065]: I1008 13:22:14.929312 5065 generic.go:334] "Generic (PLEG): container finished" podID="e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" containerID="faeae51c893a92143f092f27a0e80a2a18995f279383180fcf24ad8e93dfddc6" exitCode=0
Oct 08 13:22:14 crc kubenswrapper[5065]: I1008 13:22:14.929436 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" event={"ID":"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57","Type":"ContainerDied","Data":"faeae51c893a92143f092f27a0e80a2a18995f279383180fcf24ad8e93dfddc6"}
Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.005002 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7"
Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.046209 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5f95b94cb-rn7qr"]
Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.046983 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" containerName="extract-utilities"
Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.047093 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" containerName="extract-utilities"
Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.047173 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d3554b-9ede-49cf-a968-6a42c1620a64" containerName="extract-content"
Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.047251 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d3554b-9ede-49cf-a968-6a42c1620a64" containerName="extract-content"
Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.047332 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d3554b-9ede-49cf-a968-6a42c1620a64" containerName="registry-server"
Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.047438 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d3554b-9ede-49cf-a968-6a42c1620a64" containerName="registry-server"
Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.047536 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d3554b-9ede-49cf-a968-6a42c1620a64" containerName="extract-utilities"
Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.047621 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d3554b-9ede-49cf-a968-6a42c1620a64" containerName="extract-utilities"
Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.047704 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" containerName="extract-content"
Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.047777 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" containerName="extract-content"
Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.047848 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" containerName="registry-server"
Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.047916 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" containerName="registry-server"
Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.047996 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca22f02-86bd-40e4-bf14-bf0dcf1f6258" containerName="pruner"
Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.048078 5065 state_mem.go:107] "Deleted CPUSet assignment"
podUID="9ca22f02-86bd-40e4-bf14-bf0dcf1f6258" containerName="pruner" Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.048159 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" containerName="registry-server" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.048230 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" containerName="registry-server" Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.048307 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" containerName="oauth-openshift" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.048379 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" containerName="oauth-openshift" Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.048499 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" containerName="extract-content" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.048584 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" containerName="extract-content" Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.048671 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" containerName="extract-utilities" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.048743 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" containerName="extract-utilities" Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.048830 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" containerName="extract-content" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.048901 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" containerName="extract-content" Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.048980 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" containerName="extract-utilities" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.049049 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" containerName="extract-utilities" Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.049109 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b40a50-a7cc-4421-9787-8fb4f51ca7ae" containerName="pruner" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.049164 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b40a50-a7cc-4421-9787-8fb4f51ca7ae" containerName="pruner" Oct 08 13:22:15 crc kubenswrapper[5065]: E1008 13:22:15.049218 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" containerName="registry-server" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.049268 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" containerName="registry-server" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.049456 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="f176fc83-c863-4d1d-ba3d-bbcf7b6d6cf0" containerName="registry-server" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.049536 5065 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f9f3b5f3-9ac9-4ebc-8343-eb6d6c6b6d5b" containerName="registry-server" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.049599 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" containerName="oauth-openshift" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.049686 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca22f02-86bd-40e4-bf14-bf0dcf1f6258" containerName="pruner" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.049754 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab020e6c-4d15-48b9-a6c6-a23f10a35641" containerName="registry-server" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.049809 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d3554b-9ede-49cf-a968-6a42c1620a64" containerName="registry-server" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.049869 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b40a50-a7cc-4421-9787-8fb4f51ca7ae" containerName="pruner" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.050530 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.051504 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5f95b94cb-rn7qr"] Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193021 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-serving-cert\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193077 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-router-certs\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193127 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-trusted-ca-bundle\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193157 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-session\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193184 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t7fv\" (UniqueName: \"kubernetes.io/projected/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-kube-api-access-7t7fv\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193211 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-error\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193235 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-audit-policies\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193259 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-idp-0-file-data\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193286 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-service-ca\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193307 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-ocp-branding-template\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193334 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-provider-selection\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193357 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-audit-dir\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193507 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-login\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.193555 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-cliconfig\") pod \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\" (UID: \"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57\") " Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.194737 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: 
"v4-0-config-system-trusted-ca-bundle") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.194754 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.194771 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.194829 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.194787 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.199454 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.199712 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.199908 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-audit-dir\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.199994 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-user-template-login\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.200042 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.200097 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-audit-policies\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.200147 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-user-template-error\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.200188 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.200249 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.200319 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.200294 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-kube-api-access-7t7fv" (OuterVolumeSpecName: "kube-api-access-7t7fv") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). InnerVolumeSpecName "kube-api-access-7t7fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.200369 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.200541 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.200707 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-session\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.200778 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.200900 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9r4l\" (UniqueName: \"kubernetes.io/projected/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-kube-api-access-q9r4l\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.200995 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.201176 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.201209 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.201233 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.201263 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t7fv\" (UniqueName: \"kubernetes.io/projected/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-kube-api-access-7t7fv\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.201283 5065 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.201301 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.201327 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.201347 5065 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.203871 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.205867 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.206903 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.208569 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.208944 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.209126 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" (UID: "e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.302305 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.302377 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.302411 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.302443 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.302462 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.302484 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-session\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.302505 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.302773 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9r4l\" (UniqueName: \"kubernetes.io/projected/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-kube-api-access-q9r4l\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.302817 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.302850 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-audit-dir\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.302873 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-user-template-login\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.302892 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.302914 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-audit-policies\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.303019 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-user-template-error\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.303057 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.303068 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.303078 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.303089 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.303099 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.303110 5065 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.303666 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-audit-dir\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.304155 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.304344 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.304542 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.304625 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-audit-policies\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.306856 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.307109 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.307223 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-session\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.307468 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.308737 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.308843 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-user-template-login\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.308986 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.309400 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-v4-0-config-user-template-error\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.319890 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9r4l\" (UniqueName: \"kubernetes.io/projected/a04f52f4-4bb5-4e8a-97c5-1b1f26754855-kube-api-access-q9r4l\") pod \"oauth-openshift-5f95b94cb-rn7qr\" (UID: \"a04f52f4-4bb5-4e8a-97c5-1b1f26754855\") " pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.368935 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.788650 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5f95b94cb-rn7qr"] Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.936101 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" event={"ID":"a04f52f4-4bb5-4e8a-97c5-1b1f26754855","Type":"ContainerStarted","Data":"46793ffecaeb3dca44cbed314cbffb8cc6422292bd32a2fbb66dc1e96318a228"} Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.938595 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" event={"ID":"e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57","Type":"ContainerDied","Data":"de50af3204eaffa548b6b17fcd7317006763bc3c29c5317eaed9a4bc03dd6505"} Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.938631 5065 scope.go:117] "RemoveContainer" containerID="faeae51c893a92143f092f27a0e80a2a18995f279383180fcf24ad8e93dfddc6" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.938682 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8gdt7" Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.965555 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8gdt7"] Oct 08 13:22:15 crc kubenswrapper[5065]: I1008 13:22:15.967803 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8gdt7"] Oct 08 13:22:16 crc kubenswrapper[5065]: I1008 13:22:16.882350 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57" path="/var/lib/kubelet/pods/e3cfa1ec-ffaf-432a-a5d4-70bcf0896c57/volumes" Oct 08 13:22:16 crc kubenswrapper[5065]: I1008 13:22:16.947230 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" event={"ID":"a04f52f4-4bb5-4e8a-97c5-1b1f26754855","Type":"ContainerStarted","Data":"c836a7ae0353ee22b4efae694b5877a2328c04ec26966b744847cf3533943b88"} Oct 08 13:22:16 crc kubenswrapper[5065]: I1008 13:22:16.947461 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:16 crc kubenswrapper[5065]: I1008 13:22:16.954260 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" Oct 08 13:22:16 crc kubenswrapper[5065]: I1008 13:22:16.972887 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5f95b94cb-rn7qr" podStartSLOduration=27.972869645 podStartE2EDuration="27.972869645s" podCreationTimestamp="2025-10-08 13:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:22:16.968984634 +0000 UTC m=+238.746366411" watchObservedRunningTime="2025-10-08 13:22:16.972869645 +0000 UTC m=+238.750251402" Oct 08 13:22:44 crc kubenswrapper[5065]: I1008 13:22:44.970144 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4zh8"] Oct 08 13:22:44 crc kubenswrapper[5065]: I1008 13:22:44.972050 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t4zh8" podUID="b9e6f6b0-a470-4b38-9777-75f994c93fee" containerName="registry-server" containerID="cri-o://70ad73d301a9129ca6dabad8d4c0e1664a87b46f4098f76cba14b128a41b810f" gracePeriod=30 Oct 08 13:22:44 crc kubenswrapper[5065]: I1008 13:22:44.976809 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8djr2"] Oct 08 13:22:44 crc kubenswrapper[5065]: I1008 13:22:44.977024 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8djr2" podUID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" containerName="registry-server" containerID="cri-o://3d318c08f5b895af8cdc11a62c894b7e3c6078c197caef49a31ef237bc21a342" gracePeriod=30 Oct 08 13:22:44 crc kubenswrapper[5065]: I1008 13:22:44.985069 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xx98"] Oct 08 13:22:44 crc kubenswrapper[5065]: I1008 13:22:44.985264 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" podUID="feb22448-6135-462d-91a3-66851678143d" 
containerName="marketplace-operator" containerID="cri-o://84ea7bfb55712596c27c98dfea0bb9525b4de6f8f1df17fa71733af1a0dbc7c0" gracePeriod=30 Oct 08 13:22:44 crc kubenswrapper[5065]: I1008 13:22:44.993788 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqzmt"] Oct 08 13:22:44 crc kubenswrapper[5065]: I1008 13:22:44.994648 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gqzmt" podUID="9769ef8d-73d1-49d6-a138-efc820a036e7" containerName="registry-server" containerID="cri-o://e1d66d54514918eb9bb20ebe19dc20be6241913ea50d1a4cce2dd7f2da5a7ee4" gracePeriod=30 Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.004249 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmdbk"] Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.004773 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rmdbk" podUID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" containerName="registry-server" containerID="cri-o://ac5f7f3a0cb05e0d0abf4ba3e0632a06bc30818b0dd7b6308e7e4ca0459b3d3b" gracePeriod=30 Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.006699 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vwmng"] Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.007442 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.029677 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vwmng"] Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.054823 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vcnl\" (UniqueName: \"kubernetes.io/projected/3dc7f2a9-f40a-4142-b340-eef8a51a976c-kube-api-access-7vcnl\") pod \"marketplace-operator-79b997595-vwmng\" (UID: \"3dc7f2a9-f40a-4142-b340-eef8a51a976c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.054872 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3dc7f2a9-f40a-4142-b340-eef8a51a976c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vwmng\" (UID: \"3dc7f2a9-f40a-4142-b340-eef8a51a976c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.054917 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3dc7f2a9-f40a-4142-b340-eef8a51a976c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vwmng\" (UID: \"3dc7f2a9-f40a-4142-b340-eef8a51a976c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.108231 5065 generic.go:334] "Generic (PLEG): container finished" podID="feb22448-6135-462d-91a3-66851678143d" containerID="84ea7bfb55712596c27c98dfea0bb9525b4de6f8f1df17fa71733af1a0dbc7c0" exitCode=0 Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.108277 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" event={"ID":"feb22448-6135-462d-91a3-66851678143d","Type":"ContainerDied","Data":"84ea7bfb55712596c27c98dfea0bb9525b4de6f8f1df17fa71733af1a0dbc7c0"} Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.116907 5065 generic.go:334] "Generic (PLEG): container finished" podID="b9e6f6b0-a470-4b38-9777-75f994c93fee" containerID="70ad73d301a9129ca6dabad8d4c0e1664a87b46f4098f76cba14b128a41b810f" exitCode=0 Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.117063 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4zh8" event={"ID":"b9e6f6b0-a470-4b38-9777-75f994c93fee","Type":"ContainerDied","Data":"70ad73d301a9129ca6dabad8d4c0e1664a87b46f4098f76cba14b128a41b810f"} Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.123187 5065 generic.go:334] "Generic (PLEG): container finished" podID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" containerID="3d318c08f5b895af8cdc11a62c894b7e3c6078c197caef49a31ef237bc21a342" exitCode=0 Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.123243 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8djr2" event={"ID":"522e90a2-47b0-4a82-9cac-9665a4e2dadc","Type":"ContainerDied","Data":"3d318c08f5b895af8cdc11a62c894b7e3c6078c197caef49a31ef237bc21a342"} Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.156313 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3dc7f2a9-f40a-4142-b340-eef8a51a976c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vwmng\" (UID: \"3dc7f2a9-f40a-4142-b340-eef8a51a976c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.156463 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vcnl\" (UniqueName: \"kubernetes.io/projected/3dc7f2a9-f40a-4142-b340-eef8a51a976c-kube-api-access-7vcnl\") pod \"marketplace-operator-79b997595-vwmng\" (UID: \"3dc7f2a9-f40a-4142-b340-eef8a51a976c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.156520 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3dc7f2a9-f40a-4142-b340-eef8a51a976c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vwmng\" (UID: \"3dc7f2a9-f40a-4142-b340-eef8a51a976c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.158505 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3dc7f2a9-f40a-4142-b340-eef8a51a976c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vwmng\" (UID: \"3dc7f2a9-f40a-4142-b340-eef8a51a976c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.168272 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3dc7f2a9-f40a-4142-b340-eef8a51a976c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vwmng\" (UID: \"3dc7f2a9-f40a-4142-b340-eef8a51a976c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" Oct 08 13:22:45 crc 
kubenswrapper[5065]: I1008 13:22:45.177473 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vcnl\" (UniqueName: \"kubernetes.io/projected/3dc7f2a9-f40a-4142-b340-eef8a51a976c-kube-api-access-7vcnl\") pod \"marketplace-operator-79b997595-vwmng\" (UID: \"3dc7f2a9-f40a-4142-b340-eef8a51a976c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.322035 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.405828 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8djr2" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.432569 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4zh8" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.439529 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.506663 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmdbk" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.517822 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gqzmt" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.575336 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9hd8\" (UniqueName: \"kubernetes.io/projected/522e90a2-47b0-4a82-9cac-9665a4e2dadc-kube-api-access-f9hd8\") pod \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\" (UID: \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.575447 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-utilities\") pod \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\" (UID: \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.575510 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/feb22448-6135-462d-91a3-66851678143d-marketplace-trusted-ca\") pod \"feb22448-6135-462d-91a3-66851678143d\" (UID: \"feb22448-6135-462d-91a3-66851678143d\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.575540 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9769ef8d-73d1-49d6-a138-efc820a036e7-catalog-content\") pod \"9769ef8d-73d1-49d6-a138-efc820a036e7\" (UID: \"9769ef8d-73d1-49d6-a138-efc820a036e7\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.575749 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-catalog-content\") pod \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\" (UID: \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.575854 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9e6f6b0-a470-4b38-9777-75f994c93fee-catalog-content\") pod \"b9e6f6b0-a470-4b38-9777-75f994c93fee\" (UID: \"b9e6f6b0-a470-4b38-9777-75f994c93fee\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.575881 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vqfn\" (UniqueName: \"kubernetes.io/projected/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-kube-api-access-8vqfn\") pod \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\" (UID: \"c0ab2cf0-b994-4c5e-a074-fe9c56d34171\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.575906 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqf6x\" (UniqueName: \"kubernetes.io/projected/feb22448-6135-462d-91a3-66851678143d-kube-api-access-tqf6x\") pod \"feb22448-6135-462d-91a3-66851678143d\" (UID: \"feb22448-6135-462d-91a3-66851678143d\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.575928 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9e6f6b0-a470-4b38-9777-75f994c93fee-utilities\") pod \"b9e6f6b0-a470-4b38-9777-75f994c93fee\" (UID: \"b9e6f6b0-a470-4b38-9777-75f994c93fee\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.575954 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5thl8\" (UniqueName: \"kubernetes.io/projected/b9e6f6b0-a470-4b38-9777-75f994c93fee-kube-api-access-5thl8\") pod \"b9e6f6b0-a470-4b38-9777-75f994c93fee\" (UID: \"b9e6f6b0-a470-4b38-9777-75f994c93fee\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.575977 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/feb22448-6135-462d-91a3-66851678143d-marketplace-operator-metrics\") pod \"feb22448-6135-462d-91a3-66851678143d\" (UID: \"feb22448-6135-462d-91a3-66851678143d\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.576023 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pgn8\" (UniqueName: \"kubernetes.io/projected/9769ef8d-73d1-49d6-a138-efc820a036e7-kube-api-access-8pgn8\") pod \"9769ef8d-73d1-49d6-a138-efc820a036e7\" (UID: \"9769ef8d-73d1-49d6-a138-efc820a036e7\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.576052 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9769ef8d-73d1-49d6-a138-efc820a036e7-utilities\") pod \"9769ef8d-73d1-49d6-a138-efc820a036e7\" (UID: \"9769ef8d-73d1-49d6-a138-efc820a036e7\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.576071 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/522e90a2-47b0-4a82-9cac-9665a4e2dadc-catalog-content\") pod \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\" (UID: \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.576091 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/522e90a2-47b0-4a82-9cac-9665a4e2dadc-utilities\") pod \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\" (UID: \"522e90a2-47b0-4a82-9cac-9665a4e2dadc\") " Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.577228 5065 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-utilities" (OuterVolumeSpecName: "utilities") pod "c0ab2cf0-b994-4c5e-a074-fe9c56d34171" (UID: "c0ab2cf0-b994-4c5e-a074-fe9c56d34171"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.579117 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.579132 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9e6f6b0-a470-4b38-9777-75f994c93fee-utilities" (OuterVolumeSpecName: "utilities") pod "b9e6f6b0-a470-4b38-9777-75f994c93fee" (UID: "b9e6f6b0-a470-4b38-9777-75f994c93fee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.579967 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/522e90a2-47b0-4a82-9cac-9665a4e2dadc-utilities" (OuterVolumeSpecName: "utilities") pod "522e90a2-47b0-4a82-9cac-9665a4e2dadc" (UID: "522e90a2-47b0-4a82-9cac-9665a4e2dadc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.581326 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9769ef8d-73d1-49d6-a138-efc820a036e7-utilities" (OuterVolumeSpecName: "utilities") pod "9769ef8d-73d1-49d6-a138-efc820a036e7" (UID: "9769ef8d-73d1-49d6-a138-efc820a036e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.582651 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-kube-api-access-8vqfn" (OuterVolumeSpecName: "kube-api-access-8vqfn") pod "c0ab2cf0-b994-4c5e-a074-fe9c56d34171" (UID: "c0ab2cf0-b994-4c5e-a074-fe9c56d34171"). InnerVolumeSpecName "kube-api-access-8vqfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.583497 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feb22448-6135-462d-91a3-66851678143d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "feb22448-6135-462d-91a3-66851678143d" (UID: "feb22448-6135-462d-91a3-66851678143d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.584954 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9769ef8d-73d1-49d6-a138-efc820a036e7-kube-api-access-8pgn8" (OuterVolumeSpecName: "kube-api-access-8pgn8") pod "9769ef8d-73d1-49d6-a138-efc820a036e7" (UID: "9769ef8d-73d1-49d6-a138-efc820a036e7"). InnerVolumeSpecName "kube-api-access-8pgn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.586278 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb22448-6135-462d-91a3-66851678143d-kube-api-access-tqf6x" (OuterVolumeSpecName: "kube-api-access-tqf6x") pod "feb22448-6135-462d-91a3-66851678143d" (UID: "feb22448-6135-462d-91a3-66851678143d"). InnerVolumeSpecName "kube-api-access-tqf6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.586759 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e6f6b0-a470-4b38-9777-75f994c93fee-kube-api-access-5thl8" (OuterVolumeSpecName: "kube-api-access-5thl8") pod "b9e6f6b0-a470-4b38-9777-75f994c93fee" (UID: "b9e6f6b0-a470-4b38-9777-75f994c93fee"). InnerVolumeSpecName "kube-api-access-5thl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.587506 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb22448-6135-462d-91a3-66851678143d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "feb22448-6135-462d-91a3-66851678143d" (UID: "feb22448-6135-462d-91a3-66851678143d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.596781 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/522e90a2-47b0-4a82-9cac-9665a4e2dadc-kube-api-access-f9hd8" (OuterVolumeSpecName: "kube-api-access-f9hd8") pod "522e90a2-47b0-4a82-9cac-9665a4e2dadc" (UID: "522e90a2-47b0-4a82-9cac-9665a4e2dadc"). InnerVolumeSpecName "kube-api-access-f9hd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.615718 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9769ef8d-73d1-49d6-a138-efc820a036e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9769ef8d-73d1-49d6-a138-efc820a036e7" (UID: "9769ef8d-73d1-49d6-a138-efc820a036e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.645177 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9e6f6b0-a470-4b38-9777-75f994c93fee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9e6f6b0-a470-4b38-9777-75f994c93fee" (UID: "b9e6f6b0-a470-4b38-9777-75f994c93fee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.655010 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/522e90a2-47b0-4a82-9cac-9665a4e2dadc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "522e90a2-47b0-4a82-9cac-9665a4e2dadc" (UID: "522e90a2-47b0-4a82-9cac-9665a4e2dadc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680054 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9e6f6b0-a470-4b38-9777-75f994c93fee-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680090 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vqfn\" (UniqueName: \"kubernetes.io/projected/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-kube-api-access-8vqfn\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680103 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9e6f6b0-a470-4b38-9777-75f994c93fee-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680112 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqf6x\" (UniqueName: \"kubernetes.io/projected/feb22448-6135-462d-91a3-66851678143d-kube-api-access-tqf6x\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680120 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5thl8\" (UniqueName: \"kubernetes.io/projected/b9e6f6b0-a470-4b38-9777-75f994c93fee-kube-api-access-5thl8\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680129 5065 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/feb22448-6135-462d-91a3-66851678143d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680139 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pgn8\" (UniqueName: \"kubernetes.io/projected/9769ef8d-73d1-49d6-a138-efc820a036e7-kube-api-access-8pgn8\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680147 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9769ef8d-73d1-49d6-a138-efc820a036e7-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680155 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/522e90a2-47b0-4a82-9cac-9665a4e2dadc-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680163 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/522e90a2-47b0-4a82-9cac-9665a4e2dadc-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680173 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9hd8\" (UniqueName: \"kubernetes.io/projected/522e90a2-47b0-4a82-9cac-9665a4e2dadc-kube-api-access-f9hd8\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680181 5065 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/feb22448-6135-462d-91a3-66851678143d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680189 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9769ef8d-73d1-49d6-a138-efc820a036e7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.680234 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0ab2cf0-b994-4c5e-a074-fe9c56d34171" (UID: "c0ab2cf0-b994-4c5e-a074-fe9c56d34171"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.800907 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ab2cf0-b994-4c5e-a074-fe9c56d34171-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:22:45 crc kubenswrapper[5065]: I1008 13:22:45.833819 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vwmng"] Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.130555 5065 generic.go:334] "Generic (PLEG): container finished" podID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" containerID="ac5f7f3a0cb05e0d0abf4ba3e0632a06bc30818b0dd7b6308e7e4ca0459b3d3b" exitCode=0 Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.130658 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmdbk" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.130655 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmdbk" event={"ID":"c0ab2cf0-b994-4c5e-a074-fe9c56d34171","Type":"ContainerDied","Data":"ac5f7f3a0cb05e0d0abf4ba3e0632a06bc30818b0dd7b6308e7e4ca0459b3d3b"} Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.131057 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmdbk" event={"ID":"c0ab2cf0-b994-4c5e-a074-fe9c56d34171","Type":"ContainerDied","Data":"2ee3a1e77345e3611a4001cd06598d3723b2bbc65dc1b912ca132ff2bb0b7dcb"} Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.131075 5065 scope.go:117] "RemoveContainer" containerID="ac5f7f3a0cb05e0d0abf4ba3e0632a06bc30818b0dd7b6308e7e4ca0459b3d3b" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.133118 5065 generic.go:334] "Generic (PLEG): container finished" podID="9769ef8d-73d1-49d6-a138-efc820a036e7" containerID="e1d66d54514918eb9bb20ebe19dc20be6241913ea50d1a4cce2dd7f2da5a7ee4" exitCode=0 Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.133230 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqzmt" event={"ID":"9769ef8d-73d1-49d6-a138-efc820a036e7","Type":"ContainerDied","Data":"e1d66d54514918eb9bb20ebe19dc20be6241913ea50d1a4cce2dd7f2da5a7ee4"} Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.133269 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqzmt" event={"ID":"9769ef8d-73d1-49d6-a138-efc820a036e7","Type":"ContainerDied","Data":"898aa45d56d1883f6fb096300c5e997ce4b022e12c218173580efba80c33c065"} Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.133335 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gqzmt" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.135306 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" event={"ID":"feb22448-6135-462d-91a3-66851678143d","Type":"ContainerDied","Data":"50b7dc40260dfc3341b497dd06cf6e8344a72b2cb8393f58abe0041f2debeae8"} Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.135394 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2xx98" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.137542 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" event={"ID":"3dc7f2a9-f40a-4142-b340-eef8a51a976c","Type":"ContainerStarted","Data":"f384b05ba3ae168bb91bcaebf35df286fa02db8ac596a0a9f094fb0a4fc6ad35"} Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.137588 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" event={"ID":"3dc7f2a9-f40a-4142-b340-eef8a51a976c","Type":"ContainerStarted","Data":"e9606d7825a39f375a36678a13de04ea2e7112a1a8010839e83e6ca674a5dd24"} Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.137719 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.139682 5065 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vwmng container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body= Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.139724 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" podUID="3dc7f2a9-f40a-4142-b340-eef8a51a976c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.140677 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4zh8" event={"ID":"b9e6f6b0-a470-4b38-9777-75f994c93fee","Type":"ContainerDied","Data":"2072c643570e144c11ce8eb5348fa5203a285b1221b73523bf22d5bba9213f70"} Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.140751 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4zh8" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.145750 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8djr2" event={"ID":"522e90a2-47b0-4a82-9cac-9665a4e2dadc","Type":"ContainerDied","Data":"99432c74e95d1b5a74135b93e73674b4a615716f29ee1d962e5109ecc96b5ff3"} Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.145889 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8djr2" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.171645 5065 scope.go:117] "RemoveContainer" containerID="8608a7e56aba6ce1ad0cd29e7d37c4e8bfc749009b7f48e74c0d39d761a0e066" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.188258 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" podStartSLOduration=2.188241167 podStartE2EDuration="2.188241167s" podCreationTimestamp="2025-10-08 13:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:22:46.16191652 +0000 UTC m=+267.939298297" watchObservedRunningTime="2025-10-08 13:22:46.188241167 +0000 UTC m=+267.965622924" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.189174 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqzmt"] Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.191663 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqzmt"] Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.197358 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xx98"] Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.202072 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xx98"] Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.209001 5065 scope.go:117] "RemoveContainer" containerID="5e729d305a3cc3670c70587cf6e83be26e67288042ddeb2bd549d0b776096c8c" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.216878 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8djr2"] Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.223924 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8djr2"] Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.228067 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmdbk"] Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.231786 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rmdbk"] Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.235594 5065 scope.go:117] "RemoveContainer" containerID="ac5f7f3a0cb05e0d0abf4ba3e0632a06bc30818b0dd7b6308e7e4ca0459b3d3b" Oct 08 13:22:46 crc kubenswrapper[5065]: E1008 13:22:46.237816 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac5f7f3a0cb05e0d0abf4ba3e0632a06bc30818b0dd7b6308e7e4ca0459b3d3b\": container with ID starting with ac5f7f3a0cb05e0d0abf4ba3e0632a06bc30818b0dd7b6308e7e4ca0459b3d3b not found: ID does not exist" containerID="ac5f7f3a0cb05e0d0abf4ba3e0632a06bc30818b0dd7b6308e7e4ca0459b3d3b" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.237866 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac5f7f3a0cb05e0d0abf4ba3e0632a06bc30818b0dd7b6308e7e4ca0459b3d3b"} err="failed to get container status \"ac5f7f3a0cb05e0d0abf4ba3e0632a06bc30818b0dd7b6308e7e4ca0459b3d3b\": rpc error: code = NotFound desc = could not find container \"ac5f7f3a0cb05e0d0abf4ba3e0632a06bc30818b0dd7b6308e7e4ca0459b3d3b\": container with ID 
starting with ac5f7f3a0cb05e0d0abf4ba3e0632a06bc30818b0dd7b6308e7e4ca0459b3d3b not found: ID does not exist" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.237898 5065 scope.go:117] "RemoveContainer" containerID="8608a7e56aba6ce1ad0cd29e7d37c4e8bfc749009b7f48e74c0d39d761a0e066" Oct 08 13:22:46 crc kubenswrapper[5065]: E1008 13:22:46.238423 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8608a7e56aba6ce1ad0cd29e7d37c4e8bfc749009b7f48e74c0d39d761a0e066\": container with ID starting with 8608a7e56aba6ce1ad0cd29e7d37c4e8bfc749009b7f48e74c0d39d761a0e066 not found: ID does not exist" containerID="8608a7e56aba6ce1ad0cd29e7d37c4e8bfc749009b7f48e74c0d39d761a0e066" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.238452 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8608a7e56aba6ce1ad0cd29e7d37c4e8bfc749009b7f48e74c0d39d761a0e066"} err="failed to get container status \"8608a7e56aba6ce1ad0cd29e7d37c4e8bfc749009b7f48e74c0d39d761a0e066\": rpc error: code = NotFound desc = could not find container \"8608a7e56aba6ce1ad0cd29e7d37c4e8bfc749009b7f48e74c0d39d761a0e066\": container with ID starting with 8608a7e56aba6ce1ad0cd29e7d37c4e8bfc749009b7f48e74c0d39d761a0e066 not found: ID does not exist" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.238470 5065 scope.go:117] "RemoveContainer" containerID="5e729d305a3cc3670c70587cf6e83be26e67288042ddeb2bd549d0b776096c8c" Oct 08 13:22:46 crc kubenswrapper[5065]: E1008 13:22:46.238815 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e729d305a3cc3670c70587cf6e83be26e67288042ddeb2bd549d0b776096c8c\": container with ID starting with 5e729d305a3cc3670c70587cf6e83be26e67288042ddeb2bd549d0b776096c8c not found: ID does not exist" containerID="5e729d305a3cc3670c70587cf6e83be26e67288042ddeb2bd549d0b776096c8c" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.238839 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e729d305a3cc3670c70587cf6e83be26e67288042ddeb2bd549d0b776096c8c"} err="failed to get container status \"5e729d305a3cc3670c70587cf6e83be26e67288042ddeb2bd549d0b776096c8c\": rpc error: code = NotFound desc = could not find container \"5e729d305a3cc3670c70587cf6e83be26e67288042ddeb2bd549d0b776096c8c\": container with ID starting with 5e729d305a3cc3670c70587cf6e83be26e67288042ddeb2bd549d0b776096c8c not found: ID does not exist" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.238856 5065 scope.go:117] "RemoveContainer" containerID="e1d66d54514918eb9bb20ebe19dc20be6241913ea50d1a4cce2dd7f2da5a7ee4" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.241405 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4zh8"] Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.247649 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t4zh8"] Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.254554 5065 scope.go:117] "RemoveContainer" containerID="6ec9720446aabdaba81f26e012011d06ce473c67b5d4a0e1362db2356d01b268" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.276106 5065 scope.go:117] "RemoveContainer" containerID="ad12221fcadd7700899a2098909bad668364e6ff90d626862bd4a8557a2dcc5c" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.296183 5065 scope.go:117] 
"RemoveContainer" containerID="e1d66d54514918eb9bb20ebe19dc20be6241913ea50d1a4cce2dd7f2da5a7ee4" Oct 08 13:22:46 crc kubenswrapper[5065]: E1008 13:22:46.297098 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1d66d54514918eb9bb20ebe19dc20be6241913ea50d1a4cce2dd7f2da5a7ee4\": container with ID starting with e1d66d54514918eb9bb20ebe19dc20be6241913ea50d1a4cce2dd7f2da5a7ee4 not found: ID does not exist" containerID="e1d66d54514918eb9bb20ebe19dc20be6241913ea50d1a4cce2dd7f2da5a7ee4" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.297129 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d66d54514918eb9bb20ebe19dc20be6241913ea50d1a4cce2dd7f2da5a7ee4"} err="failed to get container status \"e1d66d54514918eb9bb20ebe19dc20be6241913ea50d1a4cce2dd7f2da5a7ee4\": rpc error: code = NotFound desc = could not find container \"e1d66d54514918eb9bb20ebe19dc20be6241913ea50d1a4cce2dd7f2da5a7ee4\": container with ID starting with e1d66d54514918eb9bb20ebe19dc20be6241913ea50d1a4cce2dd7f2da5a7ee4 not found: ID does not exist" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.297151 5065 scope.go:117] "RemoveContainer" containerID="6ec9720446aabdaba81f26e012011d06ce473c67b5d4a0e1362db2356d01b268" Oct 08 13:22:46 crc kubenswrapper[5065]: E1008 13:22:46.297355 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ec9720446aabdaba81f26e012011d06ce473c67b5d4a0e1362db2356d01b268\": container with ID starting with 6ec9720446aabdaba81f26e012011d06ce473c67b5d4a0e1362db2356d01b268 not found: ID does not exist" containerID="6ec9720446aabdaba81f26e012011d06ce473c67b5d4a0e1362db2356d01b268" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.297378 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec9720446aabdaba81f26e012011d06ce473c67b5d4a0e1362db2356d01b268"} err="failed to get container status \"6ec9720446aabdaba81f26e012011d06ce473c67b5d4a0e1362db2356d01b268\": rpc error: code = NotFound desc = could not find container \"6ec9720446aabdaba81f26e012011d06ce473c67b5d4a0e1362db2356d01b268\": container with ID starting with 6ec9720446aabdaba81f26e012011d06ce473c67b5d4a0e1362db2356d01b268 not found: ID does not exist" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.297396 5065 scope.go:117] "RemoveContainer" containerID="ad12221fcadd7700899a2098909bad668364e6ff90d626862bd4a8557a2dcc5c" Oct 08 13:22:46 crc kubenswrapper[5065]: E1008 13:22:46.297698 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad12221fcadd7700899a2098909bad668364e6ff90d626862bd4a8557a2dcc5c\": container with ID starting with ad12221fcadd7700899a2098909bad668364e6ff90d626862bd4a8557a2dcc5c not found: ID does not exist" containerID="ad12221fcadd7700899a2098909bad668364e6ff90d626862bd4a8557a2dcc5c" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.297726 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad12221fcadd7700899a2098909bad668364e6ff90d626862bd4a8557a2dcc5c"} err="failed to get container status \"ad12221fcadd7700899a2098909bad668364e6ff90d626862bd4a8557a2dcc5c\": rpc error: code = NotFound desc = could not find container \"ad12221fcadd7700899a2098909bad668364e6ff90d626862bd4a8557a2dcc5c\": container with ID starting with 
ad12221fcadd7700899a2098909bad668364e6ff90d626862bd4a8557a2dcc5c not found: ID does not exist" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.297741 5065 scope.go:117] "RemoveContainer" containerID="84ea7bfb55712596c27c98dfea0bb9525b4de6f8f1df17fa71733af1a0dbc7c0" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.330720 5065 scope.go:117] "RemoveContainer" containerID="70ad73d301a9129ca6dabad8d4c0e1664a87b46f4098f76cba14b128a41b810f" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.345339 5065 scope.go:117] "RemoveContainer" containerID="06d9e66c4fbe4f7fa8324332b271b63d727308c0c602b92538c30deacfe1a0a0" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.366013 5065 scope.go:117] "RemoveContainer" containerID="8a796efe31f38f2970ad0074228daab9f8742da4d4d88f127ce11fe3db047391" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.383918 5065 scope.go:117] "RemoveContainer" containerID="3d318c08f5b895af8cdc11a62c894b7e3c6078c197caef49a31ef237bc21a342" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.401747 5065 scope.go:117] "RemoveContainer" containerID="ba58d3081e51f06d94c046011406ab9b2c124b6b1cb5f20ef9466ab83a90255a" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.420209 5065 scope.go:117] "RemoveContainer" containerID="0964d421161460dcfeea10d008acf05ce5a2f050806c2b3cce1cf6365920ce7f" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.880355 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" path="/var/lib/kubelet/pods/522e90a2-47b0-4a82-9cac-9665a4e2dadc/volumes" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.881046 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9769ef8d-73d1-49d6-a138-efc820a036e7" path="/var/lib/kubelet/pods/9769ef8d-73d1-49d6-a138-efc820a036e7/volumes" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.881971 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9e6f6b0-a470-4b38-9777-75f994c93fee" path="/var/lib/kubelet/pods/b9e6f6b0-a470-4b38-9777-75f994c93fee/volumes" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.883200 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" path="/var/lib/kubelet/pods/c0ab2cf0-b994-4c5e-a074-fe9c56d34171/volumes" Oct 08 13:22:46 crc kubenswrapper[5065]: I1008 13:22:46.883928 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb22448-6135-462d-91a3-66851678143d" path="/var/lib/kubelet/pods/feb22448-6135-462d-91a3-66851678143d/volumes" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.157008 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vwmng" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187457 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wlxs2"] Oct 08 13:22:47 crc kubenswrapper[5065]: E1008 13:22:47.187639 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e6f6b0-a470-4b38-9777-75f994c93fee" containerName="extract-content" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187651 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e6f6b0-a470-4b38-9777-75f994c93fee" containerName="extract-content" Oct 08 13:22:47 crc kubenswrapper[5065]: E1008 13:22:47.187660 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" 
containerName="extract-content" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187666 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" containerName="extract-content" Oct 08 13:22:47 crc kubenswrapper[5065]: E1008 13:22:47.187675 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9769ef8d-73d1-49d6-a138-efc820a036e7" containerName="extract-content" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187680 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9769ef8d-73d1-49d6-a138-efc820a036e7" containerName="extract-content" Oct 08 13:22:47 crc kubenswrapper[5065]: E1008 13:22:47.187691 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb22448-6135-462d-91a3-66851678143d" containerName="marketplace-operator" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187697 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb22448-6135-462d-91a3-66851678143d" containerName="marketplace-operator" Oct 08 13:22:47 crc kubenswrapper[5065]: E1008 13:22:47.187707 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" containerName="registry-server" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187712 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" containerName="registry-server" Oct 08 13:22:47 crc kubenswrapper[5065]: E1008 13:22:47.187722 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" containerName="extract-utilities" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187727 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" containerName="extract-utilities" Oct 08 13:22:47 crc kubenswrapper[5065]: E1008 13:22:47.187735 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" containerName="registry-server" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187741 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" containerName="registry-server" Oct 08 13:22:47 crc kubenswrapper[5065]: E1008 13:22:47.187747 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" containerName="extract-utilities" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187753 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" containerName="extract-utilities" Oct 08 13:22:47 crc kubenswrapper[5065]: E1008 13:22:47.187759 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" containerName="extract-content" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187765 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" containerName="extract-content" Oct 08 13:22:47 crc kubenswrapper[5065]: E1008 13:22:47.187776 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9769ef8d-73d1-49d6-a138-efc820a036e7" containerName="registry-server" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187783 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9769ef8d-73d1-49d6-a138-efc820a036e7" containerName="registry-server" Oct 08 13:22:47 crc kubenswrapper[5065]: E1008 13:22:47.187790 5065 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9769ef8d-73d1-49d6-a138-efc820a036e7" containerName="extract-utilities" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187796 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9769ef8d-73d1-49d6-a138-efc820a036e7" containerName="extract-utilities" Oct 08 13:22:47 crc kubenswrapper[5065]: E1008 13:22:47.187804 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e6f6b0-a470-4b38-9777-75f994c93fee" containerName="extract-utilities" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187809 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e6f6b0-a470-4b38-9777-75f994c93fee" containerName="extract-utilities" Oct 08 13:22:47 crc kubenswrapper[5065]: E1008 13:22:47.187816 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e6f6b0-a470-4b38-9777-75f994c93fee" containerName="registry-server" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187822 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e6f6b0-a470-4b38-9777-75f994c93fee" containerName="registry-server" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187899 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="522e90a2-47b0-4a82-9cac-9665a4e2dadc" containerName="registry-server" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187907 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="9769ef8d-73d1-49d6-a138-efc820a036e7" containerName="registry-server" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187914 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ab2cf0-b994-4c5e-a074-fe9c56d34171" containerName="registry-server" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187922 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e6f6b0-a470-4b38-9777-75f994c93fee" containerName="registry-server" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.187932 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb22448-6135-462d-91a3-66851678143d" containerName="marketplace-operator" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.188563 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.193867 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.201807 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlxs2"] Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.324407 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj5zm\" (UniqueName: \"kubernetes.io/projected/f3c95649-b562-43e6-ba51-25625b9df60f-kube-api-access-cj5zm\") pod \"redhat-marketplace-wlxs2\" (UID: \"f3c95649-b562-43e6-ba51-25625b9df60f\") " pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.324528 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c95649-b562-43e6-ba51-25625b9df60f-catalog-content\") pod \"redhat-marketplace-wlxs2\" (UID: \"f3c95649-b562-43e6-ba51-25625b9df60f\") " pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.324582 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c95649-b562-43e6-ba51-25625b9df60f-utilities\") pod \"redhat-marketplace-wlxs2\" (UID: \"f3c95649-b562-43e6-ba51-25625b9df60f\") " pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.384297 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l9jt2"] Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.386486 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.389744 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.397747 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9jt2"] Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.425209 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c95649-b562-43e6-ba51-25625b9df60f-catalog-content\") pod \"redhat-marketplace-wlxs2\" (UID: \"f3c95649-b562-43e6-ba51-25625b9df60f\") " pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.425281 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c95649-b562-43e6-ba51-25625b9df60f-utilities\") pod \"redhat-marketplace-wlxs2\" (UID: \"f3c95649-b562-43e6-ba51-25625b9df60f\") " pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.425310 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23003ae-9c21-40d9-ad7c-f92806581aa9-utilities\") pod \"redhat-operators-l9jt2\" (UID: \"b23003ae-9c21-40d9-ad7c-f92806581aa9\") " pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.425331 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23003ae-9c21-40d9-ad7c-f92806581aa9-catalog-content\") pod \"redhat-operators-l9jt2\" (UID: \"b23003ae-9c21-40d9-ad7c-f92806581aa9\") " pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.425407 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj5zm\" (UniqueName: \"kubernetes.io/projected/f3c95649-b562-43e6-ba51-25625b9df60f-kube-api-access-cj5zm\") pod \"redhat-marketplace-wlxs2\" (UID: \"f3c95649-b562-43e6-ba51-25625b9df60f\") " pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.425450 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t85pn\" (UniqueName: \"kubernetes.io/projected/b23003ae-9c21-40d9-ad7c-f92806581aa9-kube-api-access-t85pn\") pod \"redhat-operators-l9jt2\" (UID: \"b23003ae-9c21-40d9-ad7c-f92806581aa9\") " pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.425768 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c95649-b562-43e6-ba51-25625b9df60f-utilities\") pod \"redhat-marketplace-wlxs2\" (UID: \"f3c95649-b562-43e6-ba51-25625b9df60f\") " pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.425887 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c95649-b562-43e6-ba51-25625b9df60f-catalog-content\") pod \"redhat-marketplace-wlxs2\" (UID: 
\"f3c95649-b562-43e6-ba51-25625b9df60f\") " pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.446640 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj5zm\" (UniqueName: \"kubernetes.io/projected/f3c95649-b562-43e6-ba51-25625b9df60f-kube-api-access-cj5zm\") pod \"redhat-marketplace-wlxs2\" (UID: \"f3c95649-b562-43e6-ba51-25625b9df60f\") " pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.509186 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.526115 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23003ae-9c21-40d9-ad7c-f92806581aa9-utilities\") pod \"redhat-operators-l9jt2\" (UID: \"b23003ae-9c21-40d9-ad7c-f92806581aa9\") " pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.526172 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23003ae-9c21-40d9-ad7c-f92806581aa9-catalog-content\") pod \"redhat-operators-l9jt2\" (UID: \"b23003ae-9c21-40d9-ad7c-f92806581aa9\") " pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.526284 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t85pn\" (UniqueName: \"kubernetes.io/projected/b23003ae-9c21-40d9-ad7c-f92806581aa9-kube-api-access-t85pn\") pod \"redhat-operators-l9jt2\" (UID: \"b23003ae-9c21-40d9-ad7c-f92806581aa9\") " pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.526629 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23003ae-9c21-40d9-ad7c-f92806581aa9-utilities\") pod \"redhat-operators-l9jt2\" (UID: \"b23003ae-9c21-40d9-ad7c-f92806581aa9\") " pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.526732 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23003ae-9c21-40d9-ad7c-f92806581aa9-catalog-content\") pod \"redhat-operators-l9jt2\" (UID: \"b23003ae-9c21-40d9-ad7c-f92806581aa9\") " pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.544948 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t85pn\" (UniqueName: \"kubernetes.io/projected/b23003ae-9c21-40d9-ad7c-f92806581aa9-kube-api-access-t85pn\") pod \"redhat-operators-l9jt2\" (UID: \"b23003ae-9c21-40d9-ad7c-f92806581aa9\") " pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.681826 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlxs2"] Oct 08 13:22:47 crc kubenswrapper[5065]: W1008 13:22:47.686592 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3c95649_b562_43e6_ba51_25625b9df60f.slice/crio-1e328f98dbf965a6fc87f39b65630026458a79bebdaa97042dfbf0b886f7d968 WatchSource:0}: Error finding container 
1e328f98dbf965a6fc87f39b65630026458a79bebdaa97042dfbf0b886f7d968: Status 404 returned error can't find the container with id 1e328f98dbf965a6fc87f39b65630026458a79bebdaa97042dfbf0b886f7d968 Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.713912 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:47 crc kubenswrapper[5065]: I1008 13:22:47.876050 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9jt2"] Oct 08 13:22:47 crc kubenswrapper[5065]: W1008 13:22:47.888650 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23003ae_9c21_40d9_ad7c_f92806581aa9.slice/crio-cee70a45d5b267b2ec61953c49260194b0c004e5aacafe9a0f7fa3eaa2c3e216 WatchSource:0}: Error finding container cee70a45d5b267b2ec61953c49260194b0c004e5aacafe9a0f7fa3eaa2c3e216: Status 404 returned error can't find the container with id cee70a45d5b267b2ec61953c49260194b0c004e5aacafe9a0f7fa3eaa2c3e216 Oct 08 13:22:48 crc kubenswrapper[5065]: I1008 13:22:48.160717 5065 generic.go:334] "Generic (PLEG): container finished" podID="b23003ae-9c21-40d9-ad7c-f92806581aa9" containerID="09ca15ebba4d88814d9dbb429aa01c338f614fc506ea26cab790807dd5268131" exitCode=0 Oct 08 13:22:48 crc kubenswrapper[5065]: I1008 13:22:48.160802 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9jt2" event={"ID":"b23003ae-9c21-40d9-ad7c-f92806581aa9","Type":"ContainerDied","Data":"09ca15ebba4d88814d9dbb429aa01c338f614fc506ea26cab790807dd5268131"} Oct 08 13:22:48 crc kubenswrapper[5065]: I1008 13:22:48.162140 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9jt2" event={"ID":"b23003ae-9c21-40d9-ad7c-f92806581aa9","Type":"ContainerStarted","Data":"cee70a45d5b267b2ec61953c49260194b0c004e5aacafe9a0f7fa3eaa2c3e216"} Oct 08 13:22:48 crc kubenswrapper[5065]: I1008 13:22:48.162493 5065 generic.go:334] "Generic (PLEG): container finished" podID="f3c95649-b562-43e6-ba51-25625b9df60f" containerID="ed5bcd4ade497d281b794cfa39ffab5ed7dbd126cc744b970eb8f59f00b33556" exitCode=0 Oct 08 13:22:48 crc kubenswrapper[5065]: I1008 13:22:48.162781 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlxs2" event={"ID":"f3c95649-b562-43e6-ba51-25625b9df60f","Type":"ContainerDied","Data":"ed5bcd4ade497d281b794cfa39ffab5ed7dbd126cc744b970eb8f59f00b33556"} Oct 08 13:22:48 crc kubenswrapper[5065]: I1008 13:22:48.162811 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlxs2" event={"ID":"f3c95649-b562-43e6-ba51-25625b9df60f","Type":"ContainerStarted","Data":"1e328f98dbf965a6fc87f39b65630026458a79bebdaa97042dfbf0b886f7d968"} Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.172552 5065 generic.go:334] "Generic (PLEG): container finished" podID="f3c95649-b562-43e6-ba51-25625b9df60f" containerID="1c8eea20ce9f51992bb0baf5d0c98bbe3164ba5bc56f6b6309186877ddfb5fa7" exitCode=0 Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.172589 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlxs2" event={"ID":"f3c95649-b562-43e6-ba51-25625b9df60f","Type":"ContainerDied","Data":"1c8eea20ce9f51992bb0baf5d0c98bbe3164ba5bc56f6b6309186877ddfb5fa7"} Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.582555 5065 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/certified-operators-9bzzz"] Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.583749 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.586191 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.598324 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9bzzz"] Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.754320 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-utilities\") pod \"certified-operators-9bzzz\" (UID: \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\") " pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.754716 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgrd7\" (UniqueName: \"kubernetes.io/projected/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-kube-api-access-tgrd7\") pod \"certified-operators-9bzzz\" (UID: \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\") " pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.754751 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-catalog-content\") pod \"certified-operators-9bzzz\" (UID: \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\") " pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.791241 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-twtsk"] Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.792208 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.794012 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.796176 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-twtsk"] Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.855368 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-catalog-content\") pod \"certified-operators-9bzzz\" (UID: \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\") " pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.855471 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/663bba2d-3cb7-4ead-889f-062b9ffc9a61-catalog-content\") pod \"community-operators-twtsk\" (UID: \"663bba2d-3cb7-4ead-889f-062b9ffc9a61\") " pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.855548 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8cb6\" (UniqueName: \"kubernetes.io/projected/663bba2d-3cb7-4ead-889f-062b9ffc9a61-kube-api-access-d8cb6\") pod \"community-operators-twtsk\" (UID: \"663bba2d-3cb7-4ead-889f-062b9ffc9a61\") " pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.855578 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-utilities\") pod \"certified-operators-9bzzz\" (UID: \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\") " pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.855611 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgrd7\" (UniqueName: \"kubernetes.io/projected/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-kube-api-access-tgrd7\") pod \"certified-operators-9bzzz\" (UID: \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\") " pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.855637 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/663bba2d-3cb7-4ead-889f-062b9ffc9a61-utilities\") pod \"community-operators-twtsk\" (UID: \"663bba2d-3cb7-4ead-889f-062b9ffc9a61\") " pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.856127 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-catalog-content\") pod \"certified-operators-9bzzz\" (UID: \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\") " pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.856778 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-utilities\") pod \"certified-operators-9bzzz\" (UID: 
\"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\") " pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.885398 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgrd7\" (UniqueName: \"kubernetes.io/projected/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-kube-api-access-tgrd7\") pod \"certified-operators-9bzzz\" (UID: \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\") " pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.904963 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.956317 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/663bba2d-3cb7-4ead-889f-062b9ffc9a61-utilities\") pod \"community-operators-twtsk\" (UID: \"663bba2d-3cb7-4ead-889f-062b9ffc9a61\") " pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.956399 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/663bba2d-3cb7-4ead-889f-062b9ffc9a61-catalog-content\") pod \"community-operators-twtsk\" (UID: \"663bba2d-3cb7-4ead-889f-062b9ffc9a61\") " pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.956484 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8cb6\" (UniqueName: \"kubernetes.io/projected/663bba2d-3cb7-4ead-889f-062b9ffc9a61-kube-api-access-d8cb6\") pod \"community-operators-twtsk\" (UID: \"663bba2d-3cb7-4ead-889f-062b9ffc9a61\") " pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.957297 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/663bba2d-3cb7-4ead-889f-062b9ffc9a61-utilities\") pod \"community-operators-twtsk\" (UID: \"663bba2d-3cb7-4ead-889f-062b9ffc9a61\") " pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.958566 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/663bba2d-3cb7-4ead-889f-062b9ffc9a61-catalog-content\") pod \"community-operators-twtsk\" (UID: \"663bba2d-3cb7-4ead-889f-062b9ffc9a61\") " pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:22:49 crc kubenswrapper[5065]: I1008 13:22:49.976399 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8cb6\" (UniqueName: \"kubernetes.io/projected/663bba2d-3cb7-4ead-889f-062b9ffc9a61-kube-api-access-d8cb6\") pod \"community-operators-twtsk\" (UID: \"663bba2d-3cb7-4ead-889f-062b9ffc9a61\") " pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:22:50 crc kubenswrapper[5065]: I1008 13:22:50.161002 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:22:50 crc kubenswrapper[5065]: I1008 13:22:50.200202 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlxs2" event={"ID":"f3c95649-b562-43e6-ba51-25625b9df60f","Type":"ContainerStarted","Data":"4fba1e998831e4da19ca59c988149cc2640f5115b519ddc6e181d1ccac7a9205"} Oct 08 13:22:50 crc kubenswrapper[5065]: I1008 13:22:50.204651 5065 generic.go:334] "Generic (PLEG): container finished" podID="b23003ae-9c21-40d9-ad7c-f92806581aa9" containerID="120b8f30029515927a3f5a96dd83998330dab58ffd64d5cfd4f3a0f89dbe85d8" exitCode=0 Oct 08 13:22:50 crc kubenswrapper[5065]: I1008 13:22:50.204808 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9jt2" event={"ID":"b23003ae-9c21-40d9-ad7c-f92806581aa9","Type":"ContainerDied","Data":"120b8f30029515927a3f5a96dd83998330dab58ffd64d5cfd4f3a0f89dbe85d8"} Oct 08 13:22:50 crc kubenswrapper[5065]: I1008 13:22:50.220100 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wlxs2" podStartSLOduration=1.709277797 podStartE2EDuration="3.220080248s" podCreationTimestamp="2025-10-08 13:22:47 +0000 UTC" firstStartedPulling="2025-10-08 13:22:48.163694747 +0000 UTC m=+269.941076504" lastFinishedPulling="2025-10-08 13:22:49.674497198 +0000 UTC m=+271.451878955" observedRunningTime="2025-10-08 13:22:50.217705395 +0000 UTC m=+271.995087152" watchObservedRunningTime="2025-10-08 13:22:50.220080248 +0000 UTC m=+271.997461995" Oct 08 13:22:50 crc kubenswrapper[5065]: I1008 13:22:50.319363 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9bzzz"] Oct 08 13:22:50 crc kubenswrapper[5065]: I1008 13:22:50.621838 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-twtsk"] Oct 08 13:22:50 crc kubenswrapper[5065]: W1008 13:22:50.626618 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod663bba2d_3cb7_4ead_889f_062b9ffc9a61.slice/crio-7b7649db4d40f54f88ee2cf22a92e9c1baf23bded6c28d952bebf8d2777ff4a8 WatchSource:0}: Error finding container 7b7649db4d40f54f88ee2cf22a92e9c1baf23bded6c28d952bebf8d2777ff4a8: Status 404 returned error can't find the container with id 7b7649db4d40f54f88ee2cf22a92e9c1baf23bded6c28d952bebf8d2777ff4a8 Oct 08 13:22:51 crc kubenswrapper[5065]: I1008 13:22:51.211988 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9jt2" event={"ID":"b23003ae-9c21-40d9-ad7c-f92806581aa9","Type":"ContainerStarted","Data":"83d42e549bcc22c5de035294ae0b38f9adf8ce27003c7140c9af2beaf5578108"} Oct 08 13:22:51 crc kubenswrapper[5065]: I1008 13:22:51.214034 5065 generic.go:334] "Generic (PLEG): container finished" podID="663bba2d-3cb7-4ead-889f-062b9ffc9a61" containerID="fe2f80d7d9e3cc71a7e3354fcce4b711ff5a6cb32bd7fd9e75ae9feeb8476ed7" exitCode=0 Oct 08 13:22:51 crc kubenswrapper[5065]: I1008 13:22:51.214077 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twtsk" event={"ID":"663bba2d-3cb7-4ead-889f-062b9ffc9a61","Type":"ContainerDied","Data":"fe2f80d7d9e3cc71a7e3354fcce4b711ff5a6cb32bd7fd9e75ae9feeb8476ed7"} Oct 08 13:22:51 crc kubenswrapper[5065]: I1008 13:22:51.214092 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/community-operators-twtsk" event={"ID":"663bba2d-3cb7-4ead-889f-062b9ffc9a61","Type":"ContainerStarted","Data":"7b7649db4d40f54f88ee2cf22a92e9c1baf23bded6c28d952bebf8d2777ff4a8"} Oct 08 13:22:51 crc kubenswrapper[5065]: I1008 13:22:51.216061 5065 generic.go:334] "Generic (PLEG): container finished" podID="d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" containerID="2e3989aafa0f495d411fc6db5abc5edc0a1b7f3e17d0bf1700b5415645636e57" exitCode=0 Oct 08 13:22:51 crc kubenswrapper[5065]: I1008 13:22:51.216819 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bzzz" event={"ID":"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7","Type":"ContainerDied","Data":"2e3989aafa0f495d411fc6db5abc5edc0a1b7f3e17d0bf1700b5415645636e57"} Oct 08 13:22:51 crc kubenswrapper[5065]: I1008 13:22:51.216837 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bzzz" event={"ID":"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7","Type":"ContainerStarted","Data":"baa4545861c40bbc74a4a1d289cbb007fb45ffb080f3e2ef34106b70129541ec"} Oct 08 13:22:51 crc kubenswrapper[5065]: I1008 13:22:51.232475 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l9jt2" podStartSLOduration=1.7384945520000001 podStartE2EDuration="4.232456463s" podCreationTimestamp="2025-10-08 13:22:47 +0000 UTC" firstStartedPulling="2025-10-08 13:22:48.16279761 +0000 UTC m=+269.940179367" lastFinishedPulling="2025-10-08 13:22:50.656759521 +0000 UTC m=+272.434141278" observedRunningTime="2025-10-08 13:22:51.229572124 +0000 UTC m=+273.006953901" watchObservedRunningTime="2025-10-08 13:22:51.232456463 +0000 UTC m=+273.009838220" Oct 08 13:22:54 crc kubenswrapper[5065]: I1008 13:22:54.231109 5065 generic.go:334] "Generic (PLEG): container finished" podID="663bba2d-3cb7-4ead-889f-062b9ffc9a61" containerID="5b7acebb912ccb72177e850cb6b98e2e77e136293f7d904f9f9fe984c5c409c5" exitCode=0 Oct 08 13:22:54 crc kubenswrapper[5065]: I1008 13:22:54.231202 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twtsk" event={"ID":"663bba2d-3cb7-4ead-889f-062b9ffc9a61","Type":"ContainerDied","Data":"5b7acebb912ccb72177e850cb6b98e2e77e136293f7d904f9f9fe984c5c409c5"} Oct 08 13:22:54 crc kubenswrapper[5065]: I1008 13:22:54.236858 5065 generic.go:334] "Generic (PLEG): container finished" podID="d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" containerID="2f3c7c066525a0f9a6b40e7884f7582eee933ca39660148c2c58db0235ce785c" exitCode=0 Oct 08 13:22:54 crc kubenswrapper[5065]: I1008 13:22:54.236895 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bzzz" event={"ID":"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7","Type":"ContainerDied","Data":"2f3c7c066525a0f9a6b40e7884f7582eee933ca39660148c2c58db0235ce785c"} Oct 08 13:22:55 crc kubenswrapper[5065]: I1008 13:22:55.243211 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twtsk" event={"ID":"663bba2d-3cb7-4ead-889f-062b9ffc9a61","Type":"ContainerStarted","Data":"501c29af032903432fcba92e39b7b7c33b40faac0e99b7a39d7514a7b2ec7e5e"} Oct 08 13:22:55 crc kubenswrapper[5065]: I1008 13:22:55.246434 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bzzz" event={"ID":"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7","Type":"ContainerStarted","Data":"3dc127d63e2a0395c119ea30221331c86cbd4bef3dc9f59eeb10afc82a06581b"} Oct 08 
13:22:55 crc kubenswrapper[5065]: I1008 13:22:55.304960 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-twtsk" podStartSLOduration=2.7958740989999997 podStartE2EDuration="6.304942369s" podCreationTimestamp="2025-10-08 13:22:49 +0000 UTC" firstStartedPulling="2025-10-08 13:22:51.216887186 +0000 UTC m=+272.994268943" lastFinishedPulling="2025-10-08 13:22:54.725955456 +0000 UTC m=+276.503337213" observedRunningTime="2025-10-08 13:22:55.277157568 +0000 UTC m=+277.054539325" watchObservedRunningTime="2025-10-08 13:22:55.304942369 +0000 UTC m=+277.082324126" Oct 08 13:22:55 crc kubenswrapper[5065]: I1008 13:22:55.305573 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9bzzz" podStartSLOduration=2.669506939 podStartE2EDuration="6.305567158s" podCreationTimestamp="2025-10-08 13:22:49 +0000 UTC" firstStartedPulling="2025-10-08 13:22:51.21733101 +0000 UTC m=+272.994712767" lastFinishedPulling="2025-10-08 13:22:54.853391229 +0000 UTC m=+276.630772986" observedRunningTime="2025-10-08 13:22:55.300530754 +0000 UTC m=+277.077912511" watchObservedRunningTime="2025-10-08 13:22:55.305567158 +0000 UTC m=+277.082948915" Oct 08 13:22:57 crc kubenswrapper[5065]: I1008 13:22:57.509698 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:57 crc kubenswrapper[5065]: I1008 13:22:57.511504 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:57 crc kubenswrapper[5065]: I1008 13:22:57.547505 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:57 crc kubenswrapper[5065]: I1008 13:22:57.715316 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:57 crc kubenswrapper[5065]: I1008 13:22:57.715370 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:57 crc kubenswrapper[5065]: I1008 13:22:57.756228 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:58 crc kubenswrapper[5065]: I1008 13:22:58.297931 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wlxs2" Oct 08 13:22:58 crc kubenswrapper[5065]: I1008 13:22:58.297986 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 13:22:59 crc kubenswrapper[5065]: I1008 13:22:59.905748 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:22:59 crc kubenswrapper[5065]: I1008 13:22:59.907051 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:22:59 crc kubenswrapper[5065]: I1008 13:22:59.946112 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:23:00 crc kubenswrapper[5065]: I1008 13:23:00.161656 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:23:00 crc 
kubenswrapper[5065]: I1008 13:23:00.161720 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:23:00 crc kubenswrapper[5065]: I1008 13:23:00.202702 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:23:00 crc kubenswrapper[5065]: I1008 13:23:00.309170 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-twtsk" Oct 08 13:23:00 crc kubenswrapper[5065]: I1008 13:23:00.313703 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:23:54 crc kubenswrapper[5065]: I1008 13:23:54.375547 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:23:54 crc kubenswrapper[5065]: I1008 13:23:54.376816 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:24:24 crc kubenswrapper[5065]: I1008 13:24:24.375883 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:24:24 crc kubenswrapper[5065]: I1008 13:24:24.377599 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:24:54 crc kubenswrapper[5065]: I1008 13:24:54.375034 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:24:54 crc kubenswrapper[5065]: I1008 13:24:54.375694 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:24:54 crc kubenswrapper[5065]: I1008 13:24:54.375744 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:24:54 crc kubenswrapper[5065]: I1008 13:24:54.376545 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b6ece119e94ac9da615f168a7039d14cba16573f0741f84acc41e64424ae388"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 13:24:54 crc kubenswrapper[5065]: I1008 13:24:54.376614 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://6b6ece119e94ac9da615f168a7039d14cba16573f0741f84acc41e64424ae388" gracePeriod=600 Oct 08 13:24:54 crc kubenswrapper[5065]: I1008 13:24:54.862749 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="6b6ece119e94ac9da615f168a7039d14cba16573f0741f84acc41e64424ae388" exitCode=0 Oct 08 13:24:54 crc kubenswrapper[5065]: I1008 13:24:54.863076 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"6b6ece119e94ac9da615f168a7039d14cba16573f0741f84acc41e64424ae388"} Oct 08 13:24:54 crc kubenswrapper[5065]: I1008 13:24:54.863113 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"73ee35ce2597ab47414b6734db202d73211201b50d506090ee412556d4772970"} Oct 08 13:24:54 crc kubenswrapper[5065]: I1008 13:24:54.863137 5065 scope.go:117] "RemoveContainer" containerID="2a2433b571af7981a78b896b75ae739703cef6a7baf34bd44014707c02b9a53c" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.273749 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mvqsj"] Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.274912 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.294579 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mvqsj"] Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.452570 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f87661e0-2e91-4c50-9958-9402da0e1251-registry-certificates\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.452625 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.452653 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42kcq\" (UniqueName: \"kubernetes.io/projected/f87661e0-2e91-4c50-9958-9402da0e1251-kube-api-access-42kcq\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.452716 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f87661e0-2e91-4c50-9958-9402da0e1251-bound-sa-token\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.452751 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f87661e0-2e91-4c50-9958-9402da0e1251-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.452787 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f87661e0-2e91-4c50-9958-9402da0e1251-registry-tls\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.452825 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f87661e0-2e91-4c50-9958-9402da0e1251-trusted-ca\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.452875 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/f87661e0-2e91-4c50-9958-9402da0e1251-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.470978 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.553879 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f87661e0-2e91-4c50-9958-9402da0e1251-bound-sa-token\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.553945 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f87661e0-2e91-4c50-9958-9402da0e1251-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.553998 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f87661e0-2e91-4c50-9958-9402da0e1251-registry-tls\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.554045 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f87661e0-2e91-4c50-9958-9402da0e1251-trusted-ca\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.554103 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f87661e0-2e91-4c50-9958-9402da0e1251-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.554193 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f87661e0-2e91-4c50-9958-9402da0e1251-registry-certificates\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.554290 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42kcq\" (UniqueName: \"kubernetes.io/projected/f87661e0-2e91-4c50-9958-9402da0e1251-kube-api-access-42kcq\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.555993 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f87661e0-2e91-4c50-9958-9402da0e1251-trusted-ca\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.555996 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f87661e0-2e91-4c50-9958-9402da0e1251-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.556964 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f87661e0-2e91-4c50-9958-9402da0e1251-registry-certificates\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.562273 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f87661e0-2e91-4c50-9958-9402da0e1251-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.562373 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f87661e0-2e91-4c50-9958-9402da0e1251-registry-tls\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.582253 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42kcq\" (UniqueName: \"kubernetes.io/projected/f87661e0-2e91-4c50-9958-9402da0e1251-kube-api-access-42kcq\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.584812 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f87661e0-2e91-4c50-9958-9402da0e1251-bound-sa-token\") pod \"image-registry-66df7c8f76-mvqsj\" (UID: \"f87661e0-2e91-4c50-9958-9402da0e1251\") " pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.591857 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:31 crc kubenswrapper[5065]: I1008 13:26:31.796592 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mvqsj"] Oct 08 13:26:32 crc kubenswrapper[5065]: I1008 13:26:32.406359 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" event={"ID":"f87661e0-2e91-4c50-9958-9402da0e1251","Type":"ContainerStarted","Data":"daa7ce0cbdd19a4a44af037818ae49515293b7f2d224146f56aa3f5bd6a108bd"} Oct 08 13:26:32 crc kubenswrapper[5065]: I1008 13:26:32.406434 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" event={"ID":"f87661e0-2e91-4c50-9958-9402da0e1251","Type":"ContainerStarted","Data":"53c7fc8cc70bea2f9a0c9cab1b6fe231d3bf42ee0d7849bb3170716a5676721e"} Oct 08 13:26:32 crc kubenswrapper[5065]: I1008 13:26:32.406598 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:32 crc kubenswrapper[5065]: I1008 13:26:32.427143 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" podStartSLOduration=1.427088568 podStartE2EDuration="1.427088568s" podCreationTimestamp="2025-10-08 13:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:26:32.425400778 +0000 UTC m=+494.202782575" watchObservedRunningTime="2025-10-08 13:26:32.427088568 +0000 UTC m=+494.204470325" Oct 08 13:26:51 crc kubenswrapper[5065]: I1008 13:26:51.597486 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mvqsj" Oct 08 13:26:51 crc kubenswrapper[5065]: I1008 13:26:51.657351 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nnvb5"] Oct 08 13:26:54 crc kubenswrapper[5065]: I1008 13:26:54.376231 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:26:54 crc kubenswrapper[5065]: I1008 13:26:54.376662 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:27:16 crc kubenswrapper[5065]: I1008 13:27:16.711923 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" podUID="0e2d2016-716c-4261-a1c0-5dbd804a65d8" containerName="registry" containerID="cri-o://53fea4f5b42eeda5601fdf4ba5d7d3e9c779b7c5e3677f7dfb4891f2b1f9fca1" gracePeriod=30 Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.033268 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.182725 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e2d2016-716c-4261-a1c0-5dbd804a65d8-installation-pull-secrets\") pod \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.182831 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-bound-sa-token\") pod \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.182923 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d66pg\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-kube-api-access-d66pg\") pod \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.182963 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e2d2016-716c-4261-a1c0-5dbd804a65d8-registry-certificates\") pod \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.183141 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.183237 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-registry-tls\") pod \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.183732 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0e2d2016-716c-4261-a1c0-5dbd804a65d8-ca-trust-extracted\") pod \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.183793 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e2d2016-716c-4261-a1c0-5dbd804a65d8-trusted-ca\") pod \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\" (UID: \"0e2d2016-716c-4261-a1c0-5dbd804a65d8\") " Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.184711 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e2d2016-716c-4261-a1c0-5dbd804a65d8-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0e2d2016-716c-4261-a1c0-5dbd804a65d8" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.184800 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e2d2016-716c-4261-a1c0-5dbd804a65d8-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0e2d2016-716c-4261-a1c0-5dbd804a65d8" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.185289 5065 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0e2d2016-716c-4261-a1c0-5dbd804a65d8-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.185333 5065 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e2d2016-716c-4261-a1c0-5dbd804a65d8-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.190097 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0e2d2016-716c-4261-a1c0-5dbd804a65d8" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.195729 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e2d2016-716c-4261-a1c0-5dbd804a65d8-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0e2d2016-716c-4261-a1c0-5dbd804a65d8" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.196161 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0e2d2016-716c-4261-a1c0-5dbd804a65d8" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.197515 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0e2d2016-716c-4261-a1c0-5dbd804a65d8" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.199236 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e2d2016-716c-4261-a1c0-5dbd804a65d8-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0e2d2016-716c-4261-a1c0-5dbd804a65d8" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.199634 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-kube-api-access-d66pg" (OuterVolumeSpecName: "kube-api-access-d66pg") pod "0e2d2016-716c-4261-a1c0-5dbd804a65d8" (UID: "0e2d2016-716c-4261-a1c0-5dbd804a65d8"). InnerVolumeSpecName "kube-api-access-d66pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.287277 5065 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.287354 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d66pg\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-kube-api-access-d66pg\") on node \"crc\" DevicePath \"\"" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.287386 5065 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0e2d2016-716c-4261-a1c0-5dbd804a65d8-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.287445 5065 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0e2d2016-716c-4261-a1c0-5dbd804a65d8-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.287472 5065 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0e2d2016-716c-4261-a1c0-5dbd804a65d8-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.675226 5065 generic.go:334] "Generic (PLEG): container finished" podID="0e2d2016-716c-4261-a1c0-5dbd804a65d8" containerID="53fea4f5b42eeda5601fdf4ba5d7d3e9c779b7c5e3677f7dfb4891f2b1f9fca1" exitCode=0 Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.675273 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" event={"ID":"0e2d2016-716c-4261-a1c0-5dbd804a65d8","Type":"ContainerDied","Data":"53fea4f5b42eeda5601fdf4ba5d7d3e9c779b7c5e3677f7dfb4891f2b1f9fca1"} Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.675306 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" event={"ID":"0e2d2016-716c-4261-a1c0-5dbd804a65d8","Type":"ContainerDied","Data":"f3e2504baaf28a95e0164c52a1c413ae2043765ae3431165a64f658184f6ab09"} Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.675324 5065 scope.go:117] "RemoveContainer" containerID="53fea4f5b42eeda5601fdf4ba5d7d3e9c779b7c5e3677f7dfb4891f2b1f9fca1" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.675339 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nnvb5" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.695632 5065 scope.go:117] "RemoveContainer" containerID="53fea4f5b42eeda5601fdf4ba5d7d3e9c779b7c5e3677f7dfb4891f2b1f9fca1" Oct 08 13:27:17 crc kubenswrapper[5065]: E1008 13:27:17.696593 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53fea4f5b42eeda5601fdf4ba5d7d3e9c779b7c5e3677f7dfb4891f2b1f9fca1\": container with ID starting with 53fea4f5b42eeda5601fdf4ba5d7d3e9c779b7c5e3677f7dfb4891f2b1f9fca1 not found: ID does not exist" containerID="53fea4f5b42eeda5601fdf4ba5d7d3e9c779b7c5e3677f7dfb4891f2b1f9fca1" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.696622 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53fea4f5b42eeda5601fdf4ba5d7d3e9c779b7c5e3677f7dfb4891f2b1f9fca1"} err="failed to get container status \"53fea4f5b42eeda5601fdf4ba5d7d3e9c779b7c5e3677f7dfb4891f2b1f9fca1\": rpc error: code = NotFound desc = could not find container \"53fea4f5b42eeda5601fdf4ba5d7d3e9c779b7c5e3677f7dfb4891f2b1f9fca1\": container with ID starting with 53fea4f5b42eeda5601fdf4ba5d7d3e9c779b7c5e3677f7dfb4891f2b1f9fca1 not found: ID does not exist" Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.711765 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nnvb5"] Oct 08 13:27:17 crc kubenswrapper[5065]: I1008 13:27:17.715196 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nnvb5"] Oct 08 13:27:18 crc kubenswrapper[5065]: I1008 13:27:18.885465 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e2d2016-716c-4261-a1c0-5dbd804a65d8" path="/var/lib/kubelet/pods/0e2d2016-716c-4261-a1c0-5dbd804a65d8/volumes" Oct 08 13:27:24 crc kubenswrapper[5065]: I1008 13:27:24.375802 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:27:24 crc kubenswrapper[5065]: I1008 13:27:24.376339 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:27:54 crc kubenswrapper[5065]: I1008 13:27:54.375777 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:27:54 crc kubenswrapper[5065]: I1008 13:27:54.376336 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:27:54 crc kubenswrapper[5065]: I1008 13:27:54.376376 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:27:54 crc kubenswrapper[5065]: I1008 13:27:54.376873 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73ee35ce2597ab47414b6734db202d73211201b50d506090ee412556d4772970"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 13:27:54 crc kubenswrapper[5065]: I1008 13:27:54.376925 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://73ee35ce2597ab47414b6734db202d73211201b50d506090ee412556d4772970" gracePeriod=600 Oct 08 13:27:54 crc kubenswrapper[5065]: I1008 13:27:54.889753 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="73ee35ce2597ab47414b6734db202d73211201b50d506090ee412556d4772970" exitCode=0 Oct 08 13:27:54 crc kubenswrapper[5065]: I1008 13:27:54.890977 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"73ee35ce2597ab47414b6734db202d73211201b50d506090ee412556d4772970"} Oct 08 13:27:54 crc kubenswrapper[5065]: I1008 13:27:54.891019 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"03687d9c2628c1d5d874abdb932a1eb112aa1d5d672fca57fe617c3d9d4bd54c"} Oct 08 13:27:54 crc kubenswrapper[5065]: I1008 13:27:54.891038 5065 scope.go:117] "RemoveContainer" containerID="6b6ece119e94ac9da615f168a7039d14cba16573f0741f84acc41e64424ae388" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.542874 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-hhd9n"] Oct 08 13:29:28 crc kubenswrapper[5065]: E1008 13:29:28.543629 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2d2016-716c-4261-a1c0-5dbd804a65d8" containerName="registry" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.543645 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2d2016-716c-4261-a1c0-5dbd804a65d8" containerName="registry" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.543764 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e2d2016-716c-4261-a1c0-5dbd804a65d8" containerName="registry" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.544162 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hhd9n" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.546648 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.546854 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.546935 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.546947 5065 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-bsn6p" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.552519 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hhd9n"] Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.690852 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-crc-storage\") pod \"crc-storage-crc-hhd9n\" (UID: \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\") " pod="crc-storage/crc-storage-crc-hhd9n" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.691237 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shdgh\" (UniqueName: \"kubernetes.io/projected/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-kube-api-access-shdgh\") pod \"crc-storage-crc-hhd9n\" (UID: \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\") " pod="crc-storage/crc-storage-crc-hhd9n" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.691270 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-node-mnt\") pod \"crc-storage-crc-hhd9n\" (UID: \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\") " pod="crc-storage/crc-storage-crc-hhd9n" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.791897 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-crc-storage\") pod \"crc-storage-crc-hhd9n\" (UID: \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\") " pod="crc-storage/crc-storage-crc-hhd9n" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.791969 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shdgh\" (UniqueName: \"kubernetes.io/projected/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-kube-api-access-shdgh\") pod \"crc-storage-crc-hhd9n\" (UID: \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\") " pod="crc-storage/crc-storage-crc-hhd9n" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.791994 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-node-mnt\") pod \"crc-storage-crc-hhd9n\" (UID: \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\") " pod="crc-storage/crc-storage-crc-hhd9n" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.792285 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-node-mnt\") pod \"crc-storage-crc-hhd9n\" (UID: \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\") " 
pod="crc-storage/crc-storage-crc-hhd9n" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.793326 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-crc-storage\") pod \"crc-storage-crc-hhd9n\" (UID: \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\") " pod="crc-storage/crc-storage-crc-hhd9n" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.816988 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shdgh\" (UniqueName: \"kubernetes.io/projected/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-kube-api-access-shdgh\") pod \"crc-storage-crc-hhd9n\" (UID: \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\") " pod="crc-storage/crc-storage-crc-hhd9n" Oct 08 13:29:28 crc kubenswrapper[5065]: I1008 13:29:28.866823 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hhd9n" Oct 08 13:29:29 crc kubenswrapper[5065]: I1008 13:29:29.116834 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hhd9n"] Oct 08 13:29:29 crc kubenswrapper[5065]: I1008 13:29:29.128145 5065 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 13:29:29 crc kubenswrapper[5065]: I1008 13:29:29.392232 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hhd9n" event={"ID":"d8e111c5-5a16-4675-a8eb-d1ae6a879e11","Type":"ContainerStarted","Data":"ee4a3f650772023604d8e4e900ec6b0c6f5dd5d2059b6f767db5ee46e47c153b"} Oct 08 13:29:31 crc kubenswrapper[5065]: I1008 13:29:31.405686 5065 generic.go:334] "Generic (PLEG): container finished" podID="d8e111c5-5a16-4675-a8eb-d1ae6a879e11" containerID="2e2f1283121aa17e65e981235cc5f1b1e6eec09949062844b8db4dfe9ff3f371" exitCode=0 Oct 08 13:29:31 crc kubenswrapper[5065]: I1008 13:29:31.405765 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hhd9n" event={"ID":"d8e111c5-5a16-4675-a8eb-d1ae6a879e11","Type":"ContainerDied","Data":"2e2f1283121aa17e65e981235cc5f1b1e6eec09949062844b8db4dfe9ff3f371"} Oct 08 13:29:32 crc kubenswrapper[5065]: I1008 13:29:32.661145 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hhd9n" Oct 08 13:29:32 crc kubenswrapper[5065]: I1008 13:29:32.755875 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-crc-storage\") pod \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\" (UID: \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\") " Oct 08 13:29:32 crc kubenswrapper[5065]: I1008 13:29:32.755974 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-node-mnt\") pod \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\" (UID: \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\") " Oct 08 13:29:32 crc kubenswrapper[5065]: I1008 13:29:32.756047 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shdgh\" (UniqueName: \"kubernetes.io/projected/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-kube-api-access-shdgh\") pod \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\" (UID: \"d8e111c5-5a16-4675-a8eb-d1ae6a879e11\") " Oct 08 13:29:32 crc kubenswrapper[5065]: I1008 13:29:32.756096 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "d8e111c5-5a16-4675-a8eb-d1ae6a879e11" (UID: "d8e111c5-5a16-4675-a8eb-d1ae6a879e11"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:32 crc kubenswrapper[5065]: I1008 13:29:32.756350 5065 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-node-mnt\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:32 crc kubenswrapper[5065]: I1008 13:29:32.760880 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-kube-api-access-shdgh" (OuterVolumeSpecName: "kube-api-access-shdgh") pod "d8e111c5-5a16-4675-a8eb-d1ae6a879e11" (UID: "d8e111c5-5a16-4675-a8eb-d1ae6a879e11"). InnerVolumeSpecName "kube-api-access-shdgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:29:32 crc kubenswrapper[5065]: I1008 13:29:32.768640 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "d8e111c5-5a16-4675-a8eb-d1ae6a879e11" (UID: "d8e111c5-5a16-4675-a8eb-d1ae6a879e11"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:29:32 crc kubenswrapper[5065]: I1008 13:29:32.857386 5065 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-crc-storage\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:32 crc kubenswrapper[5065]: I1008 13:29:32.857459 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shdgh\" (UniqueName: \"kubernetes.io/projected/d8e111c5-5a16-4675-a8eb-d1ae6a879e11-kube-api-access-shdgh\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:33 crc kubenswrapper[5065]: I1008 13:29:33.421644 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hhd9n" event={"ID":"d8e111c5-5a16-4675-a8eb-d1ae6a879e11","Type":"ContainerDied","Data":"ee4a3f650772023604d8e4e900ec6b0c6f5dd5d2059b6f767db5ee46e47c153b"} Oct 08 13:29:33 crc kubenswrapper[5065]: I1008 13:29:33.421696 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hhd9n" Oct 08 13:29:33 crc kubenswrapper[5065]: I1008 13:29:33.421700 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee4a3f650772023604d8e4e900ec6b0c6f5dd5d2059b6f767db5ee46e47c153b" Oct 08 13:29:39 crc kubenswrapper[5065]: I1008 13:29:39.667262 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-96g69"] Oct 08 13:29:39 crc kubenswrapper[5065]: I1008 13:29:39.668280 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovn-controller" containerID="cri-o://5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962" gracePeriod=30 Oct 08 13:29:39 crc kubenswrapper[5065]: I1008 13:29:39.668390 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="northd" containerID="cri-o://150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a" gracePeriod=30 Oct 08 13:29:39 crc kubenswrapper[5065]: I1008 13:29:39.668432 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7" gracePeriod=30 Oct 08 13:29:39 crc kubenswrapper[5065]: I1008 13:29:39.668484 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="kube-rbac-proxy-node" containerID="cri-o://324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab" gracePeriod=30 Oct 08 13:29:39 crc kubenswrapper[5065]: I1008 13:29:39.668521 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovn-acl-logging" containerID="cri-o://1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99" gracePeriod=30 Oct 08 13:29:39 crc kubenswrapper[5065]: I1008 13:29:39.668757 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" 
podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="sbdb" containerID="cri-o://154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1" gracePeriod=30 Oct 08 13:29:39 crc kubenswrapper[5065]: I1008 13:29:39.668793 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="nbdb" containerID="cri-o://ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503" gracePeriod=30 Oct 08 13:29:39 crc kubenswrapper[5065]: I1008 13:29:39.711666 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" containerID="cri-o://8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1" gracePeriod=30 Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.002377 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/3.log" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.004448 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovn-acl-logging/0.log" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.004914 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovn-controller/0.log" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.005336 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050462 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f7sk4"] Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.050647 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="kubecfg-setup" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050657 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="kubecfg-setup" Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.050670 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="kube-rbac-proxy-ovn-metrics" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050676 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="kube-rbac-proxy-ovn-metrics" Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.050684 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e111c5-5a16-4675-a8eb-d1ae6a879e11" containerName="storage" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050690 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e111c5-5a16-4675-a8eb-d1ae6a879e11" containerName="storage" Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.050698 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="sbdb" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050703 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="sbdb" Oct 08 13:29:40 crc 
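
Every kill line in this burst traces back to the single SyncLoop DELETE for ovnkube-node-96g69: the kubelet stops each of the pod's containers through CRI-O using the pod-level 30-second grace period. The equivalent graceful deletion issued from a client looks like this (namespace, pod name, and grace value are taken from the log; the kubeconfig path is an assumption):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// gracePeriod=30 in the kill lines is the pod's termination grace period;
	// passing it explicitly here produces the same countdown.
	grace := int64(30)
	if err := client.CoreV1().Pods("openshift-ovn-kubernetes").Delete(
		context.TODO(), "ovnkube-node-96g69",
		metav1.DeleteOptions{GracePeriodSeconds: &grace},
	); err != nil {
		panic(err)
	}
}

If a process outlives the grace period, the runtime escalates from SIGTERM to SIGKILL.
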
kubenswrapper[5065]: E1008 13:29:40.050711 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovn-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050716 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovn-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.050726 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="kube-rbac-proxy-node" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050731 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="kube-rbac-proxy-node" Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.050739 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050745 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.050753 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050759 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.050766 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovn-acl-logging" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050771 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovn-acl-logging" Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.050778 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050784 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.050791 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="northd" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050796 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="northd" Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.050805 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="nbdb" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050810 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="nbdb" Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.050816 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050821 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 
13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.050827 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050834 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050917 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050926 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050933 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovn-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050939 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovn-acl-logging" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050951 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8e111c5-5a16-4675-a8eb-d1ae6a879e11" containerName="storage" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050958 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="kube-rbac-proxy-node" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050965 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="kube-rbac-proxy-ovn-metrics" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050972 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="sbdb" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050978 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050985 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="northd" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.050992 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="nbdb" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.051140 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.051149 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerName="ovnkube-controller" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.052594 5065 util.go:30] "No sandbox for pod can be found. 
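
Despite the E-level tags, the cpu_manager.go / state_mem.go pairs above are routine housekeeping, not failures: before admitting ovnkube-node-f7sk4, the CPU and memory managers drop checkpointed assignments still keyed to containers of the pods just deleted (each repeated ovnkube-controller entry corresponds to a prior instance of that container). A sketch that inspects the kubelet's on-disk CPU-manager checkpoint; the path is the kubelet's usual state file, and the JSON field names are assumptions here:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Assumed shape of /var/lib/kubelet/cpu_manager_state.
type cpuManagerCheckpoint struct {
	PolicyName    string                       `json:"policyName"`
	DefaultCPUSet string                       `json:"defaultCpuSet"`
	Entries       map[string]map[string]string `json:"entries"` // podUID -> container -> cpuset
}

func main() {
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		panic(err)
	}
	var cp cpuManagerCheckpoint
	if err := json.Unmarshal(raw, &cp); err != nil {
		panic(err)
	}
	fmt.Printf("policy=%s default=%q tracked pods=%d\n",
		cp.PolicyName, cp.DefaultCPUSet, len(cp.Entries))
}
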
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.150372 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-systemd\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.150715 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-var-lib-openvswitch\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.150745 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-node-log\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.150776 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovnkube-script-lib\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.150798 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-openvswitch\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.150819 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-systemd-units\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.150836 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-ovn\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.150909 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-run-netns\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.150929 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-kubelet\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.150962 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovnkube-config\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: 
\"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151005 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-log-socket\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151029 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xftmm\" (UniqueName: \"kubernetes.io/projected/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-kube-api-access-xftmm\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151050 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-cni-netd\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151068 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-slash\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151091 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-env-overrides\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151113 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-run-ovn-kubernetes\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151137 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-var-lib-cni-networks-ovn-kubernetes\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151158 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-etc-openvswitch\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151179 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-cni-bin\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151211 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovn-node-metrics-cert\") pod \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\" (UID: \"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17\") " Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151356 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-node-log\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151387 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-run-ovn\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151466 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-slash\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151491 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-run-netns\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151510 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-var-lib-openvswitch\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151535 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-kubelet\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151558 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-log-socket\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151580 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/774c9053-f445-45e4-aa5a-3ea4055068bd-ovnkube-config\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151600 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-r5tzd\" (UniqueName: \"kubernetes.io/projected/774c9053-f445-45e4-aa5a-3ea4055068bd-kube-api-access-r5tzd\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151635 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-cni-netd\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151656 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/774c9053-f445-45e4-aa5a-3ea4055068bd-ovnkube-script-lib\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151681 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/774c9053-f445-45e4-aa5a-3ea4055068bd-ovn-node-metrics-cert\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151703 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-cni-bin\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151729 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-run-systemd\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151754 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-etc-openvswitch\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151777 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151797 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/774c9053-f445-45e4-aa5a-3ea4055068bd-env-overrides\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 
13:29:40.151819 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151847 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-systemd-units\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151869 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-run-openvswitch\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151967 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.151996 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-node-log" (OuterVolumeSpecName: "node-log") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152383 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152435 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152462 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "systemd-units". 
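
The VerifyControllerAttachedVolume burst for ovnkube-node-f7sk4 lists the pod's volumes by plugin: mostly kubernetes.io/host-path mounts of node directories, plus configmap, secret, and projected sources. One volume of each of the first three kinds, sketched with the Kubernetes API types; the host path string is an invented placeholder, and only the volume and object names come from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func ovnkubeNodeVolumes() []corev1.Volume {
	return []corev1.Volume{
		{
			// kubernetes.io/host-path entries bind node directories straight in.
			Name: "run-ovn",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/run/ovn"}, // placeholder path
			},
		},
		{
			// kubernetes.io/configmap, e.g. ovnkube-config and ovnkube-script-lib.
			Name: "ovnkube-config",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "ovnkube-config"},
				},
			},
		},
		{
			// kubernetes.io/secret backing the metrics TLS material.
			Name: "ovn-node-metrics-cert",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "ovn-node-metrics-cert"},
			},
		},
	}
}

func main() {
	fmt.Printf("%d volumes sketched\n", len(ovnkubeNodeVolumes()))
}
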
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152510 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152540 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-slash" (OuterVolumeSpecName: "host-slash") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152572 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152592 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152610 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152610 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb"] Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152602 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152703 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152742 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152734 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152772 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-log-socket" (OuterVolumeSpecName: "log-socket") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.152811 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.153325 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.153729 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.155800 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.157267 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-kube-api-access-xftmm" (OuterVolumeSpecName: "kube-api-access-xftmm") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "kube-api-access-xftmm". 
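
The kube-api-access-xftmm teardown above removes the kubelet-generated projected service-account volume: a single mount assembled from a bound token, the kube-root-ca.crt ConfigMap, and the namespace via the downward API. A sketch of that spec fragment; the 3607-second expiry is the value such volumes typically carry and is an assumption here:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func kubeAPIAccessVolume(name string) corev1.Volume {
	expiry := int64(3607) // typical kubelet-generated value; assumed for the sketch
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
}

func main() {
	fmt.Println(kubeAPIAccessVolume("kube-api-access-xftmm").Name)
}
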
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.157314 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.172759 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" (UID: "953c2ee2-f53f-4a77-8e47-2f7fc1aefc17"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.252921 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-run-ovn\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253012 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-run-netns\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253087 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-var-lib-openvswitch\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253133 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-slash\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253174 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-kubelet\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253220 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-log-socket\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253266 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/774c9053-f445-45e4-aa5a-3ea4055068bd-ovnkube-config\") pod \"ovnkube-node-f7sk4\" (UID: 
\"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253310 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5tzd\" (UniqueName: \"kubernetes.io/projected/774c9053-f445-45e4-aa5a-3ea4055068bd-kube-api-access-r5tzd\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253378 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-cni-netd\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253452 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/774c9053-f445-45e4-aa5a-3ea4055068bd-ovnkube-script-lib\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253528 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb\" (UID: \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253541 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-kubelet\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253585 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/774c9053-f445-45e4-aa5a-3ea4055068bd-ovn-node-metrics-cert\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253610 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-slash\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253615 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-var-lib-openvswitch\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253638 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-log-socket\") pod \"ovnkube-node-f7sk4\" (UID: 
\"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253637 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-cni-bin\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253760 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-cni-bin\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253673 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-cni-netd\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253680 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-run-ovn\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253829 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-run-systemd\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253620 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-run-netns\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253884 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-run-systemd\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253934 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-etc-openvswitch\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253969 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb\" (UID: \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\") " 
pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.253996 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8p6m\" (UniqueName: \"kubernetes.io/projected/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-kube-api-access-x8p6m\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb\" (UID: \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254016 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/774c9053-f445-45e4-aa5a-3ea4055068bd-env-overrides\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254034 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254022 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-etc-openvswitch\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254056 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254071 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254127 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-systemd-units\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254132 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254152 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-run-openvswitch\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254160 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-systemd-units\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254201 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-run-openvswitch\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254225 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-node-log\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254303 5065 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-log-socket\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254317 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xftmm\" (UniqueName: \"kubernetes.io/projected/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-kube-api-access-xftmm\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254329 5065 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254340 5065 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-slash\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254351 5065 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254352 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/774c9053-f445-45e4-aa5a-3ea4055068bd-ovnkube-script-lib\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254364 5065 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254376 5065 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254384 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/774c9053-f445-45e4-aa5a-3ea4055068bd-ovnkube-config\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254387 5065 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254451 5065 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254467 5065 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254478 5065 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254394 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/774c9053-f445-45e4-aa5a-3ea4055068bd-node-log\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254489 5065 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254551 5065 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-node-log\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254579 5065 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254606 5065 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254629 5065 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254654 5065 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-run-ovn\") on node \"crc\" DevicePath \"\"" 
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254676 5065 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-run-netns\") on node \"crc\" DevicePath \"\""
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254699 5065 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-host-kubelet\") on node \"crc\" DevicePath \"\""
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254723 5065 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17-ovnkube-config\") on node \"crc\" DevicePath \"\""
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.254744 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/774c9053-f445-45e4-aa5a-3ea4055068bd-env-overrides\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.258604 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/774c9053-f445-45e4-aa5a-3ea4055068bd-ovn-node-metrics-cert\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.272348 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5tzd\" (UniqueName: \"kubernetes.io/projected/774c9053-f445-45e4-aa5a-3ea4055068bd-kube-api-access-r5tzd\") pod \"ovnkube-node-f7sk4\" (UID: \"774c9053-f445-45e4-aa5a-3ea4055068bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.356065 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb\" (UID: \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.356140 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb\" (UID: \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.356179 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8p6m\" (UniqueName: \"kubernetes.io/projected/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-kube-api-access-x8p6m\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb\" (UID: \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.356541 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb\" (UID: \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.356635 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb\" (UID: \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.366666 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.374175 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8p6m\" (UniqueName: \"kubernetes.io/projected/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-kube-api-access-x8p6m\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb\" (UID: \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.463268 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkvkk_ddc2ce1c-bf76-4663-a2d6-e518ff7a4678/kube-multus/2.log"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.464092 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkvkk_ddc2ce1c-bf76-4663-a2d6-e518ff7a4678/kube-multus/1.log"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.464162 5065 generic.go:334] "Generic (PLEG): container finished" podID="ddc2ce1c-bf76-4663-a2d6-e518ff7a4678" containerID="3fc3fa49d9469ddc9f0cf14a9709270dfe42e85b0357c77c10baa16acfeeb096" exitCode=2
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.464249 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkvkk" event={"ID":"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678","Type":"ContainerDied","Data":"3fc3fa49d9469ddc9f0cf14a9709270dfe42e85b0357c77c10baa16acfeeb096"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.464308 5065 scope.go:117] "RemoveContainer" containerID="bad714c0e33515688589117e54c6a54fdeb7c42bc8208661378db01033cb893b"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.464915 5065 scope.go:117] "RemoveContainer" containerID="3fc3fa49d9469ddc9f0cf14a9709270dfe42e85b0357c77c10baa16acfeeb096"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.465331 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-dkvkk_openshift-multus(ddc2ce1c-bf76-4663-a2d6-e518ff7a4678)\"" pod="openshift-multus/multus-dkvkk" podUID="ddc2ce1c-bf76-4663-a2d6-e518ff7a4678"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.465966 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.469735 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovnkube-controller/3.log"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.474797 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovn-acl-logging/0.log"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.475386 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96g69_953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/ovn-controller/0.log"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.475928 5065 generic.go:334] "Generic (PLEG): container finished" podID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerID="8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1" exitCode=0
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.475957 5065 generic.go:334] "Generic (PLEG): container finished" podID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerID="154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1" exitCode=0
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.475966 5065 generic.go:334] "Generic (PLEG): container finished" podID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerID="ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503" exitCode=0
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.475975 5065 generic.go:334] "Generic (PLEG): container finished" podID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerID="150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a" exitCode=0
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.475987 5065 generic.go:334] "Generic (PLEG): container finished" podID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerID="5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7" exitCode=0
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.475996 5065 generic.go:334] "Generic (PLEG): container finished" podID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerID="324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab" exitCode=0
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476006 5065 generic.go:334] "Generic (PLEG): container finished" podID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerID="1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99" exitCode=143
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476015 5065 generic.go:334] "Generic (PLEG): container finished" podID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" containerID="5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962" exitCode=143
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476003 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476089 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476107 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476122 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476136 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476149 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476161 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476173 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476180 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476186 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476193 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476201 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476207 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476214 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476221 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476227 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476236 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476245 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476255 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476262 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476268 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476275 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476281 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476288 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476294 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476306 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476313 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476323 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476334 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476342 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476349 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476356 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476362 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476369 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476376 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476382 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476389 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476396 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476406 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96g69" event={"ID":"953c2ee2-f53f-4a77-8e47-2f7fc1aefc17","Type":"ContainerDied","Data":"3d8ae8dae4bbfd436440942d2f844a7d842e9c0bbf66a6f7d1d62703e371bb55"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476431 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476441 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476449 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476456 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476462 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476471 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476477 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476484 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476491 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476511 5065 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.476757 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-96g69"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.478152 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" event={"ID":"774c9053-f445-45e4-aa5a-3ea4055068bd","Type":"ContainerStarted","Data":"4d9d8642c1b94e4e71ade037e2cd7a1c074917d0b64f21f1cf8cead474db11e7"}
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.508152 5065 scope.go:117] "RemoveContainer" containerID="8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.517527 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-96g69"]
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.525004 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-96g69"]
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.529253 5065 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83_0(897dd2510efd66373fa6a72f580846b5ee58a5c00abcee0c56b1f496132a24c0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.529327 5065 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83_0(897dd2510efd66373fa6a72f580846b5ee58a5c00abcee0c56b1f496132a24c0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.529433 5065 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83_0(897dd2510efd66373fa6a72f580846b5ee58a5c00abcee0c56b1f496132a24c0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.529505 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace(329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace(329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83_0(897dd2510efd66373fa6a72f580846b5ee58a5c00abcee0c56b1f496132a24c0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" podUID="329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.576781 5065 scope.go:117] "RemoveContainer" containerID="4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.593391 5065 scope.go:117] "RemoveContainer" containerID="154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.607062 5065 scope.go:117] "RemoveContainer" containerID="ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.621726 5065 scope.go:117] "RemoveContainer" containerID="150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.636283 5065 scope.go:117] "RemoveContainer" containerID="5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.654193 5065 scope.go:117] "RemoveContainer" containerID="324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.672351 5065 scope.go:117] "RemoveContainer" containerID="1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.689316 5065 scope.go:117] "RemoveContainer" containerID="5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.707726 5065 scope.go:117] "RemoveContainer" containerID="d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.723471 5065 scope.go:117] "RemoveContainer" containerID="8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.724376 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1\": container with ID starting with 8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1 not found: ID does not exist" containerID="8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.724438 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"} err="failed to get container status \"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1\": rpc error: code = NotFound desc = could not find container \"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1\": container with ID starting with 8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.724465 5065 scope.go:117] "RemoveContainer" containerID="4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.725152 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\": container with ID starting with 4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f not found: ID does not exist" containerID="4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.725201 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"} err="failed to get container status \"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\": rpc error: code = NotFound desc = could not find container \"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\": container with ID starting with 4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.725217 5065 scope.go:117] "RemoveContainer" containerID="154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.725562 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\": container with ID starting with 154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1 not found: ID does not exist" containerID="154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.725587 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"} err="failed to get container status \"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\": rpc error: code = NotFound desc = could not find container \"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\": container with ID starting with 154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.725599 5065 scope.go:117] "RemoveContainer" containerID="ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.726168 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\": container with ID starting with ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503 not found: ID does not exist" containerID="ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.726192 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"} err="failed to get container status \"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\": rpc error: code = NotFound desc = could not find container \"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\": container with ID starting with ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.726206 5065 scope.go:117] "RemoveContainer" containerID="150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.727987 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\": container with ID starting with 150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a not found: ID does not exist" containerID="150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.728082 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"} err="failed to get container status \"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\": rpc error: code = NotFound desc = could not find container \"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\": container with ID starting with 150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.728126 5065 scope.go:117] "RemoveContainer" containerID="5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.728539 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\": container with ID starting with 5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7 not found: ID does not exist" containerID="5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.728567 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"} err="failed to get container status \"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\": rpc error: code = NotFound desc = could not find container \"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\": container with ID starting with 5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.728612 5065 scope.go:117] "RemoveContainer" containerID="324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.728860 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\": container with ID starting with 324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab not found: ID does not exist" containerID="324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.728897 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"} err="failed to get container status \"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\": rpc error: code = NotFound desc = could not find container \"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\": container with ID starting with 324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.728909 5065 scope.go:117] "RemoveContainer" containerID="1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.729242 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\": container with ID starting with 1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99 not found: ID does not exist" containerID="1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.729276 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"} err="failed to get container status \"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\": rpc error: code = NotFound desc = could not find container \"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\": container with ID starting with 1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.729288 5065 scope.go:117] "RemoveContainer" containerID="5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.729545 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\": container with ID starting with 5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962 not found: ID does not exist" containerID="5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.729587 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"} err="failed to get container status \"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\": rpc error: code = NotFound desc = could not find container \"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\": container with ID starting with 5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.729616 5065 scope.go:117] "RemoveContainer" containerID="d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"
Oct 08 13:29:40 crc kubenswrapper[5065]: E1008 13:29:40.729879 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\": container with ID starting with d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af not found: ID does not exist" containerID="d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.729900 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"} err="failed to get container status \"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\": rpc error: code = NotFound desc = could not find container \"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\": container with ID starting with d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.729915 5065 scope.go:117] "RemoveContainer" containerID="8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.730277 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"} err="failed to get container status \"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1\": rpc error: code = NotFound desc = could not find container \"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1\": container with ID starting with 8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.730295 5065 scope.go:117] "RemoveContainer" containerID="4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.730689 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"} err="failed to get container status \"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\": rpc error: code = NotFound desc = could not find container \"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\": container with ID starting with 4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.730710 5065 scope.go:117] "RemoveContainer" containerID="154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.731036 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"} err="failed to get container status \"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\": rpc error: code = NotFound desc = could not find container \"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\": container with ID starting with 154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.731065 5065 scope.go:117] "RemoveContainer" containerID="ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.731341 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"} err="failed to get container status \"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\": rpc error: code = NotFound desc = could not find container \"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\": container with ID starting with ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.731360 5065 scope.go:117] "RemoveContainer" containerID="150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.731608 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"} err="failed to get container status \"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\": rpc error: code = NotFound desc = could not find container \"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\": container with ID starting with 150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.731630 5065 scope.go:117] "RemoveContainer" containerID="5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.731856 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"} err="failed to get container status \"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\": rpc error: code = NotFound desc = could not find container \"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\": container with ID starting with 5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.731875 5065 scope.go:117] "RemoveContainer" containerID="324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.732222 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"} err="failed to get container status \"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\": rpc error: code = NotFound desc = could not find container \"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\": container with ID starting with 324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.732240 5065 scope.go:117] "RemoveContainer" containerID="1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.732660 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"} err="failed to get container status \"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\": rpc error: code = NotFound desc = could not find container \"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\": container with ID starting with 1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.732707 5065 scope.go:117] "RemoveContainer" containerID="5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.733219 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"} err="failed to get container status \"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\": rpc error: code = NotFound desc = could not find container \"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\": container with ID starting with 5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.733263 5065 scope.go:117] "RemoveContainer" containerID="d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.733588 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"} err="failed to get container status \"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\": rpc error: code = NotFound desc = could not find container \"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\": container with ID starting with d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.733608 5065 scope.go:117] "RemoveContainer" containerID="8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.733869 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"} err="failed to get container status \"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1\": rpc error: code = NotFound desc = could not find container \"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1\": container with ID starting with 8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.733896 5065 scope.go:117] "RemoveContainer" containerID="4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.734180 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"} err="failed to get container status \"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\": rpc error: code = NotFound desc = could not find container \"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\": container with ID starting with 4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.734198 5065 scope.go:117] "RemoveContainer" containerID="154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.734456 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"} err="failed to get container status \"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\": rpc error: code = NotFound desc = could not find container \"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\": container with ID starting with 154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.734476 5065 scope.go:117] "RemoveContainer" containerID="ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.734708 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"} err="failed to get container status \"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\": rpc error: code = NotFound desc = could not find container \"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\": container with ID starting with ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.734744 5065 scope.go:117] "RemoveContainer" containerID="150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.734924 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"} err="failed to get container status \"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\": rpc error: code = NotFound desc = could not find container \"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\": container with ID starting with 150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.734950 5065 scope.go:117] "RemoveContainer" containerID="5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.735152 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"} err="failed to get container status \"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\": rpc error: code = NotFound desc = could not find container \"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\": container with ID starting with 5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.735172 5065 scope.go:117] "RemoveContainer" containerID="324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.735453 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"} err="failed to get container status \"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\": rpc error: code = NotFound desc = could not find container \"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\": container with ID starting with 324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.735470 5065 scope.go:117] "RemoveContainer" containerID="1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.735639 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"} err="failed to get container status \"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\": rpc error: code = NotFound desc = could not find container \"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\": container with ID starting with 1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.735691 5065 scope.go:117] "RemoveContainer" containerID="5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.735970 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"} err="failed to get container status \"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\": rpc error: code = NotFound desc = could not find container \"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\": container with ID starting with 5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.735987 5065 scope.go:117] "RemoveContainer" containerID="d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.736215 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"} err="failed to get container status \"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\": rpc error: code = NotFound desc = could not find container \"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\": container with ID starting with d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.736233 5065 scope.go:117] "RemoveContainer" containerID="8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.736554 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1"} err="failed to get container status \"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1\": rpc error: code = NotFound desc = could not find container \"8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1\": container with ID starting with 8357a4e8109c2a0074d693839eac2e32e41f09753e23880d8ffe6f52b87faea1 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.736587 5065 scope.go:117] "RemoveContainer" containerID="4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.736814 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f"} err="failed to get container status \"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\": rpc error: code = NotFound desc = could not find container \"4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f\": container with ID starting with 4611327b4860bcfecb38884b6f6ef99f6928a14beddbf43941724237b1f43d6f not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.736834 5065 scope.go:117] "RemoveContainer" containerID="154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.737113 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1"} err="failed to get container status \"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\": rpc error: code = NotFound desc = could not find container \"154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1\": container with ID starting with 154d8505f15a90d2eb9f3c5950e637fe38828343e42526a7e6a73c69153547d1 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.737130 5065 scope.go:117] "RemoveContainer" containerID="ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.737352 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503"} err="failed to get container status \"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\": rpc error: code = NotFound desc = could not find container \"ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503\": container with ID starting with ba387b6dfdf6b9970a8794b78b1fd82b5f203dd222288fa0a6aa378ef2eec503 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.737375 5065 scope.go:117] "RemoveContainer" containerID="150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.737712 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a"} err="failed to get container status \"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\": rpc error: code = NotFound desc = could not find container \"150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a\": container with ID starting with 150e03f7f72c1b5e062f7fd5af3969b3e53d66e4d202825f99d91f60df2a7a9a not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.737734 5065 scope.go:117] "RemoveContainer" containerID="5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.737975 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7"} err="failed to get container status \"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\": rpc error: code = NotFound desc = could not find container \"5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7\": container with ID starting with 5207ae55658b9c35c3900c9f865174579934cbb5e95dcb5ca94e39caeb483ae7 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.737999 5065 scope.go:117] "RemoveContainer" containerID="324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.738225 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab"} err="failed to get container status \"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\": rpc error: code = NotFound desc = could not find container \"324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab\": container with ID starting with 324e448fc37bcbdf75da5ca0a3b5dbdbfa7e0debd692cc323a9ffb2c3cd063ab not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.738244 5065 scope.go:117] "RemoveContainer" containerID="1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.738552 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99"} err="failed to get container status \"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\": rpc error: code = NotFound desc = could not find container \"1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99\": container with ID starting with 1b8fe883b432bd72d6bf342213bf7852e11f1472b00eaacc3573b908ace75a99 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.738589 5065 scope.go:117] "RemoveContainer" containerID="5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.738831 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962"} err="failed to get container status \"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\": rpc error: code = NotFound desc = could not find container \"5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962\": container with ID starting with 5c93a0a287443b85c7368957e3c7d21c43880dd6be137e28885245ac4fc7e962 not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.738851 5065 scope.go:117] "RemoveContainer" containerID="d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.739069 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af"} err="failed to get container status \"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\": rpc error: code = NotFound desc = could not find container \"d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af\": container with ID starting with d475efd490f630c23f0a7c08bea77745bdda65dfb68933e2db4704f08bc976af not found: ID does not exist"
Oct 08 13:29:40 crc kubenswrapper[5065]: I1008 13:29:40.880370 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="953c2ee2-f53f-4a77-8e47-2f7fc1aefc17" path="/var/lib/kubelet/pods/953c2ee2-f53f-4a77-8e47-2f7fc1aefc17/volumes" Oct 08 13:29:41 crc kubenswrapper[5065]: I1008 13:29:41.484380 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkvkk_ddc2ce1c-bf76-4663-a2d6-e518ff7a4678/kube-multus/2.log" Oct 08 13:29:41 crc kubenswrapper[5065]: I1008 13:29:41.488856 5065 generic.go:334] "Generic (PLEG): container finished" podID="774c9053-f445-45e4-aa5a-3ea4055068bd" containerID="dab3134729a5933d870bfc9cd272d2d4e837b1eb056c4bb3b4c6fc95e35ac201" exitCode=0 Oct 08 13:29:41 crc kubenswrapper[5065]: I1008 13:29:41.488918 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" event={"ID":"774c9053-f445-45e4-aa5a-3ea4055068bd","Type":"ContainerDied","Data":"dab3134729a5933d870bfc9cd272d2d4e837b1eb056c4bb3b4c6fc95e35ac201"} Oct 08 13:29:42 crc kubenswrapper[5065]: I1008 13:29:42.497579 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" event={"ID":"774c9053-f445-45e4-aa5a-3ea4055068bd","Type":"ContainerStarted","Data":"9ce7b74449c345063da8504ccdf3c1ee4668c3be39c29e721ab63fb1c11b746f"} Oct 08 13:29:42 crc kubenswrapper[5065]: I1008 13:29:42.497918 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" event={"ID":"774c9053-f445-45e4-aa5a-3ea4055068bd","Type":"ContainerStarted","Data":"fc2d10cee8a687e10d80c8bb3665bcb87d34368500a8d93be71a2e30d926972a"} Oct 08 13:29:42 crc kubenswrapper[5065]: I1008 13:29:42.497931 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" event={"ID":"774c9053-f445-45e4-aa5a-3ea4055068bd","Type":"ContainerStarted","Data":"cac548a191a783cdfab24c1c96adf09005fce17a2881d27670312fa40060527e"} Oct 08 13:29:42 crc kubenswrapper[5065]: I1008 13:29:42.497940 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" event={"ID":"774c9053-f445-45e4-aa5a-3ea4055068bd","Type":"ContainerStarted","Data":"72bfb20762a5a9507721916f5ff1712e27e432125bfcf278d19efcc61c8a2ac5"} Oct 08 13:29:42 crc kubenswrapper[5065]: I1008 13:29:42.497949 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" event={"ID":"774c9053-f445-45e4-aa5a-3ea4055068bd","Type":"ContainerStarted","Data":"22605b742610efd0865074439a8c9585ca9beb8a026c72f9f38ad151c0a8f68e"} Oct 08 13:29:42 crc kubenswrapper[5065]: I1008 13:29:42.497957 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" event={"ID":"774c9053-f445-45e4-aa5a-3ea4055068bd","Type":"ContainerStarted","Data":"8678ee179c0185d5911905b428e2475202b24fd396286a948dc11891a7d1b1c8"} Oct 08 13:29:44 crc kubenswrapper[5065]: I1008 13:29:44.512700 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" event={"ID":"774c9053-f445-45e4-aa5a-3ea4055068bd","Type":"ContainerStarted","Data":"4ef6e404f5f055563cc60d751986436b95a87cd28f42fe461934db81de605e1a"} Oct 08 13:29:47 crc kubenswrapper[5065]: I1008 13:29:47.379792 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb"] Oct 08 13:29:47 crc kubenswrapper[5065]: I1008 13:29:47.380441 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:29:47 crc kubenswrapper[5065]: I1008 13:29:47.380835 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:29:47 crc kubenswrapper[5065]: E1008 13:29:47.401970 5065 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83_0(9c311e3304f57a8e811428c21117680e7593914f3ba39f54bb4911d711948abc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 13:29:47 crc kubenswrapper[5065]: E1008 13:29:47.402055 5065 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83_0(9c311e3304f57a8e811428c21117680e7593914f3ba39f54bb4911d711948abc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:29:47 crc kubenswrapper[5065]: E1008 13:29:47.402083 5065 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83_0(9c311e3304f57a8e811428c21117680e7593914f3ba39f54bb4911d711948abc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:29:47 crc kubenswrapper[5065]: E1008 13:29:47.402152 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace(329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace(329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83_0(9c311e3304f57a8e811428c21117680e7593914f3ba39f54bb4911d711948abc): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" podUID="329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" Oct 08 13:29:47 crc kubenswrapper[5065]: I1008 13:29:47.536512 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" event={"ID":"774c9053-f445-45e4-aa5a-3ea4055068bd","Type":"ContainerStarted","Data":"03eb5fa3244fbf520623152773ddcfc291dc520a5f48256ae75f9a1d149ef4a4"} Oct 08 13:29:47 crc kubenswrapper[5065]: I1008 13:29:47.537506 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:47 crc kubenswrapper[5065]: I1008 13:29:47.537528 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:47 crc kubenswrapper[5065]: I1008 13:29:47.537568 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:47 crc kubenswrapper[5065]: I1008 13:29:47.586876 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:47 crc kubenswrapper[5065]: I1008 13:29:47.599897 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:29:47 crc kubenswrapper[5065]: I1008 13:29:47.620346 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" podStartSLOduration=7.620330246 podStartE2EDuration="7.620330246s" podCreationTimestamp="2025-10-08 13:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:29:47.583061583 +0000 UTC m=+689.360443340" watchObservedRunningTime="2025-10-08 13:29:47.620330246 +0000 UTC m=+689.397712023" Oct 08 13:29:50 crc kubenswrapper[5065]: I1008 13:29:50.873377 5065 scope.go:117] "RemoveContainer" containerID="3fc3fa49d9469ddc9f0cf14a9709270dfe42e85b0357c77c10baa16acfeeb096" Oct 08 13:29:50 crc kubenswrapper[5065]: E1008 13:29:50.874293 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-dkvkk_openshift-multus(ddc2ce1c-bf76-4663-a2d6-e518ff7a4678)\"" pod="openshift-multus/multus-dkvkk" podUID="ddc2ce1c-bf76-4663-a2d6-e518ff7a4678" Oct 08 13:29:54 crc kubenswrapper[5065]: I1008 13:29:54.375316 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:29:54 crc kubenswrapper[5065]: I1008 13:29:54.375400 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:29:59 crc kubenswrapper[5065]: I1008 13:29:59.873600 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:29:59 crc kubenswrapper[5065]: I1008 13:29:59.874420 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:29:59 crc kubenswrapper[5065]: E1008 13:29:59.913463 5065 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83_0(3300e920900c32031f3c3b780dc0f9c9dd9d2d871a248c5a09e66dd330973479): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 13:29:59 crc kubenswrapper[5065]: E1008 13:29:59.913908 5065 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83_0(3300e920900c32031f3c3b780dc0f9c9dd9d2d871a248c5a09e66dd330973479): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:29:59 crc kubenswrapper[5065]: E1008 13:29:59.913939 5065 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83_0(3300e920900c32031f3c3b780dc0f9c9dd9d2d871a248c5a09e66dd330973479): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:29:59 crc kubenswrapper[5065]: E1008 13:29:59.914002 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace(329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace(329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_openshift-marketplace_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83_0(3300e920900c32031f3c3b780dc0f9c9dd9d2d871a248c5a09e66dd330973479): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" podUID="329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.138392 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws"] Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.139278 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.141584 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.142168 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.152200 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws"] Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.307232 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87dmf\" (UniqueName: \"kubernetes.io/projected/6b20789e-c4cd-4819-b966-b8897cc55b60-kube-api-access-87dmf\") pod \"collect-profiles-29332170-sp7ws\" (UID: \"6b20789e-c4cd-4819-b966-b8897cc55b60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.307283 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b20789e-c4cd-4819-b966-b8897cc55b60-secret-volume\") pod \"collect-profiles-29332170-sp7ws\" (UID: \"6b20789e-c4cd-4819-b966-b8897cc55b60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.307330 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b20789e-c4cd-4819-b966-b8897cc55b60-config-volume\") pod \"collect-profiles-29332170-sp7ws\" (UID: \"6b20789e-c4cd-4819-b966-b8897cc55b60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.408317 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b20789e-c4cd-4819-b966-b8897cc55b60-config-volume\") pod \"collect-profiles-29332170-sp7ws\" (UID: \"6b20789e-c4cd-4819-b966-b8897cc55b60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.408439 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87dmf\" (UniqueName: \"kubernetes.io/projected/6b20789e-c4cd-4819-b966-b8897cc55b60-kube-api-access-87dmf\") pod \"collect-profiles-29332170-sp7ws\" (UID: \"6b20789e-c4cd-4819-b966-b8897cc55b60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.408501 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b20789e-c4cd-4819-b966-b8897cc55b60-secret-volume\") pod \"collect-profiles-29332170-sp7ws\" (UID: \"6b20789e-c4cd-4819-b966-b8897cc55b60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.409286 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b20789e-c4cd-4819-b966-b8897cc55b60-config-volume\") pod 
\"collect-profiles-29332170-sp7ws\" (UID: \"6b20789e-c4cd-4819-b966-b8897cc55b60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.414133 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b20789e-c4cd-4819-b966-b8897cc55b60-secret-volume\") pod \"collect-profiles-29332170-sp7ws\" (UID: \"6b20789e-c4cd-4819-b966-b8897cc55b60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.425110 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87dmf\" (UniqueName: \"kubernetes.io/projected/6b20789e-c4cd-4819-b966-b8897cc55b60-kube-api-access-87dmf\") pod \"collect-profiles-29332170-sp7ws\" (UID: \"6b20789e-c4cd-4819-b966-b8897cc55b60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.468861 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: E1008 13:30:00.487933 5065 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29332170-sp7ws_openshift-operator-lifecycle-manager_6b20789e-c4cd-4819-b966-b8897cc55b60_0(3fe340db727ce04b884e3bcc7483a540861a1b6b0a9e91b41a3788881d6dd86c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 13:30:00 crc kubenswrapper[5065]: E1008 13:30:00.487987 5065 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29332170-sp7ws_openshift-operator-lifecycle-manager_6b20789e-c4cd-4819-b966-b8897cc55b60_0(3fe340db727ce04b884e3bcc7483a540861a1b6b0a9e91b41a3788881d6dd86c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: E1008 13:30:00.488011 5065 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29332170-sp7ws_openshift-operator-lifecycle-manager_6b20789e-c4cd-4819-b966-b8897cc55b60_0(3fe340db727ce04b884e3bcc7483a540861a1b6b0a9e91b41a3788881d6dd86c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: E1008 13:30:00.488055 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29332170-sp7ws_openshift-operator-lifecycle-manager(6b20789e-c4cd-4819-b966-b8897cc55b60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29332170-sp7ws_openshift-operator-lifecycle-manager(6b20789e-c4cd-4819-b966-b8897cc55b60)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29332170-sp7ws_openshift-operator-lifecycle-manager_6b20789e-c4cd-4819-b966-b8897cc55b60_0(3fe340db727ce04b884e3bcc7483a540861a1b6b0a9e91b41a3788881d6dd86c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" podUID="6b20789e-c4cd-4819-b966-b8897cc55b60" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.603972 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: I1008 13:30:00.604497 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: E1008 13:30:00.622117 5065 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29332170-sp7ws_openshift-operator-lifecycle-manager_6b20789e-c4cd-4819-b966-b8897cc55b60_0(3f5007e70a5ed90d4acb8bf6f6b23411ec8ffb236910bea398ad1e9fdc7870af): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 13:30:00 crc kubenswrapper[5065]: E1008 13:30:00.622390 5065 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29332170-sp7ws_openshift-operator-lifecycle-manager_6b20789e-c4cd-4819-b966-b8897cc55b60_0(3f5007e70a5ed90d4acb8bf6f6b23411ec8ffb236910bea398ad1e9fdc7870af): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: E1008 13:30:00.622414 5065 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29332170-sp7ws_openshift-operator-lifecycle-manager_6b20789e-c4cd-4819-b966-b8897cc55b60_0(3f5007e70a5ed90d4acb8bf6f6b23411ec8ffb236910bea398ad1e9fdc7870af): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:00 crc kubenswrapper[5065]: E1008 13:30:00.622477 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29332170-sp7ws_openshift-operator-lifecycle-manager(6b20789e-c4cd-4819-b966-b8897cc55b60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29332170-sp7ws_openshift-operator-lifecycle-manager(6b20789e-c4cd-4819-b966-b8897cc55b60)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29332170-sp7ws_openshift-operator-lifecycle-manager_6b20789e-c4cd-4819-b966-b8897cc55b60_0(3f5007e70a5ed90d4acb8bf6f6b23411ec8ffb236910bea398ad1e9fdc7870af): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" podUID="6b20789e-c4cd-4819-b966-b8897cc55b60" Oct 08 13:30:01 crc kubenswrapper[5065]: I1008 13:30:01.873086 5065 scope.go:117] "RemoveContainer" containerID="3fc3fa49d9469ddc9f0cf14a9709270dfe42e85b0357c77c10baa16acfeeb096" Oct 08 13:30:02 crc kubenswrapper[5065]: I1008 13:30:02.615865 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dkvkk_ddc2ce1c-bf76-4663-a2d6-e518ff7a4678/kube-multus/2.log" Oct 08 13:30:02 crc kubenswrapper[5065]: I1008 13:30:02.616246 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dkvkk" event={"ID":"ddc2ce1c-bf76-4663-a2d6-e518ff7a4678","Type":"ContainerStarted","Data":"e0d39d5f4f9a4ca1ecaad028ca5703ff5b5c117430137e4b7d483542b2ffcdce"} Oct 08 13:30:10 crc kubenswrapper[5065]: I1008 13:30:10.390770 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f7sk4" Oct 08 13:30:13 crc kubenswrapper[5065]: I1008 13:30:13.872862 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:13 crc kubenswrapper[5065]: I1008 13:30:13.873576 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:14 crc kubenswrapper[5065]: I1008 13:30:14.110592 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws"] Oct 08 13:30:14 crc kubenswrapper[5065]: I1008 13:30:14.686009 5065 generic.go:334] "Generic (PLEG): container finished" podID="6b20789e-c4cd-4819-b966-b8897cc55b60" containerID="f022e75bf11a900ae58837843328c4337993d3b8745834a8226ffb04939d3695" exitCode=0 Oct 08 13:30:14 crc kubenswrapper[5065]: I1008 13:30:14.686116 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" event={"ID":"6b20789e-c4cd-4819-b966-b8897cc55b60","Type":"ContainerDied","Data":"f022e75bf11a900ae58837843328c4337993d3b8745834a8226ffb04939d3695"} Oct 08 13:30:14 crc kubenswrapper[5065]: I1008 13:30:14.686906 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" event={"ID":"6b20789e-c4cd-4819-b966-b8897cc55b60","Type":"ContainerStarted","Data":"22df3665a3208c97014442dd2b932a20a78b035a3d400c9fd6b58db60f62d145"} Oct 08 13:30:14 crc kubenswrapper[5065]: I1008 13:30:14.873706 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:30:14 crc kubenswrapper[5065]: I1008 13:30:14.874906 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:30:15 crc kubenswrapper[5065]: I1008 13:30:15.081960 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb"] Oct 08 13:30:15 crc kubenswrapper[5065]: I1008 13:30:15.695162 5065 generic.go:334] "Generic (PLEG): container finished" podID="329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" containerID="5c16581db514920cb1169a4d925878122b6448a8c4f3f28d7e424b6df3c9e569" exitCode=0 Oct 08 13:30:15 crc kubenswrapper[5065]: I1008 13:30:15.695247 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" event={"ID":"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83","Type":"ContainerDied","Data":"5c16581db514920cb1169a4d925878122b6448a8c4f3f28d7e424b6df3c9e569"} Oct 08 13:30:15 crc kubenswrapper[5065]: I1008 13:30:15.695303 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" event={"ID":"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83","Type":"ContainerStarted","Data":"8ea1b863de6ccd106421a339d277e304df1dbeca820826cc0e3d3ee378f0b691"} Oct 08 13:30:15 crc kubenswrapper[5065]: I1008 13:30:15.948175 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:16 crc kubenswrapper[5065]: I1008 13:30:16.006810 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87dmf\" (UniqueName: \"kubernetes.io/projected/6b20789e-c4cd-4819-b966-b8897cc55b60-kube-api-access-87dmf\") pod \"6b20789e-c4cd-4819-b966-b8897cc55b60\" (UID: \"6b20789e-c4cd-4819-b966-b8897cc55b60\") " Oct 08 13:30:16 crc kubenswrapper[5065]: I1008 13:30:16.006872 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b20789e-c4cd-4819-b966-b8897cc55b60-config-volume\") pod \"6b20789e-c4cd-4819-b966-b8897cc55b60\" (UID: \"6b20789e-c4cd-4819-b966-b8897cc55b60\") " Oct 08 13:30:16 crc kubenswrapper[5065]: I1008 13:30:16.006942 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b20789e-c4cd-4819-b966-b8897cc55b60-secret-volume\") pod \"6b20789e-c4cd-4819-b966-b8897cc55b60\" (UID: \"6b20789e-c4cd-4819-b966-b8897cc55b60\") " Oct 08 13:30:16 crc kubenswrapper[5065]: I1008 13:30:16.007951 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b20789e-c4cd-4819-b966-b8897cc55b60-config-volume" (OuterVolumeSpecName: "config-volume") pod "6b20789e-c4cd-4819-b966-b8897cc55b60" (UID: "6b20789e-c4cd-4819-b966-b8897cc55b60"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:30:16 crc kubenswrapper[5065]: I1008 13:30:16.012552 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b20789e-c4cd-4819-b966-b8897cc55b60-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6b20789e-c4cd-4819-b966-b8897cc55b60" (UID: "6b20789e-c4cd-4819-b966-b8897cc55b60"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:30:16 crc kubenswrapper[5065]: I1008 13:30:16.012581 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b20789e-c4cd-4819-b966-b8897cc55b60-kube-api-access-87dmf" (OuterVolumeSpecName: "kube-api-access-87dmf") pod "6b20789e-c4cd-4819-b966-b8897cc55b60" (UID: "6b20789e-c4cd-4819-b966-b8897cc55b60"). InnerVolumeSpecName "kube-api-access-87dmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:30:16 crc kubenswrapper[5065]: I1008 13:30:16.108290 5065 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b20789e-c4cd-4819-b966-b8897cc55b60-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:16 crc kubenswrapper[5065]: I1008 13:30:16.108343 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87dmf\" (UniqueName: \"kubernetes.io/projected/6b20789e-c4cd-4819-b966-b8897cc55b60-kube-api-access-87dmf\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:16 crc kubenswrapper[5065]: I1008 13:30:16.108356 5065 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b20789e-c4cd-4819-b966-b8897cc55b60-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:16 crc kubenswrapper[5065]: I1008 13:30:16.702631 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" event={"ID":"6b20789e-c4cd-4819-b966-b8897cc55b60","Type":"ContainerDied","Data":"22df3665a3208c97014442dd2b932a20a78b035a3d400c9fd6b58db60f62d145"} Oct 08 13:30:16 crc kubenswrapper[5065]: I1008 13:30:16.702683 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22df3665a3208c97014442dd2b932a20a78b035a3d400c9fd6b58db60f62d145" Oct 08 13:30:16 crc kubenswrapper[5065]: I1008 13:30:16.702699 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws" Oct 08 13:30:17 crc kubenswrapper[5065]: I1008 13:30:17.710994 5065 generic.go:334] "Generic (PLEG): container finished" podID="329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" containerID="e344ec882dfd34f9f90bad2cf697ff89a02a0a439530695c95086a5df71424b2" exitCode=0 Oct 08 13:30:17 crc kubenswrapper[5065]: I1008 13:30:17.711143 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" event={"ID":"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83","Type":"ContainerDied","Data":"e344ec882dfd34f9f90bad2cf697ff89a02a0a439530695c95086a5df71424b2"} Oct 08 13:30:18 crc kubenswrapper[5065]: I1008 13:30:18.722637 5065 generic.go:334] "Generic (PLEG): container finished" podID="329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" containerID="18fa4020d9ec0e9dd915a3ee272f0267f44bc3f16c0a2b9925da03139820c30f" exitCode=0 Oct 08 13:30:18 crc kubenswrapper[5065]: I1008 13:30:18.722714 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" event={"ID":"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83","Type":"ContainerDied","Data":"18fa4020d9ec0e9dd915a3ee272f0267f44bc3f16c0a2b9925da03139820c30f"} Oct 08 13:30:19 crc kubenswrapper[5065]: I1008 13:30:19.939844 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:30:19 crc kubenswrapper[5065]: I1008 13:30:19.964291 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8p6m\" (UniqueName: \"kubernetes.io/projected/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-kube-api-access-x8p6m\") pod \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\" (UID: \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\") " Oct 08 13:30:19 crc kubenswrapper[5065]: I1008 13:30:19.964333 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-util\") pod \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\" (UID: \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\") " Oct 08 13:30:19 crc kubenswrapper[5065]: I1008 13:30:19.964392 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-bundle\") pod \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\" (UID: \"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83\") " Oct 08 13:30:19 crc kubenswrapper[5065]: I1008 13:30:19.965998 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-bundle" (OuterVolumeSpecName: "bundle") pod "329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" (UID: "329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:30:19 crc kubenswrapper[5065]: I1008 13:30:19.970572 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-kube-api-access-x8p6m" (OuterVolumeSpecName: "kube-api-access-x8p6m") pod "329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" (UID: "329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83"). InnerVolumeSpecName "kube-api-access-x8p6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:30:19 crc kubenswrapper[5065]: I1008 13:30:19.979359 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-util" (OuterVolumeSpecName: "util") pod "329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" (UID: "329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:30:20 crc kubenswrapper[5065]: I1008 13:30:20.065698 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8p6m\" (UniqueName: \"kubernetes.io/projected/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-kube-api-access-x8p6m\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:20 crc kubenswrapper[5065]: I1008 13:30:20.065725 5065 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-util\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:20 crc kubenswrapper[5065]: I1008 13:30:20.065735 5065 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:20 crc kubenswrapper[5065]: I1008 13:30:20.740654 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" event={"ID":"329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83","Type":"ContainerDied","Data":"8ea1b863de6ccd106421a339d277e304df1dbeca820826cc0e3d3ee378f0b691"} Oct 08 13:30:20 crc kubenswrapper[5065]: I1008 13:30:20.740717 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ea1b863de6ccd106421a339d277e304df1dbeca820826cc0e3d3ee378f0b691" Oct 08 13:30:20 crc kubenswrapper[5065]: I1008 13:30:20.740760 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb" Oct 08 13:30:24 crc kubenswrapper[5065]: I1008 13:30:24.375107 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:30:24 crc kubenswrapper[5065]: I1008 13:30:24.375562 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:30:26 crc kubenswrapper[5065]: I1008 13:30:26.879704 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-jgq8v"] Oct 08 13:30:26 crc kubenswrapper[5065]: E1008 13:30:26.880251 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b20789e-c4cd-4819-b966-b8897cc55b60" containerName="collect-profiles" Oct 08 13:30:26 crc kubenswrapper[5065]: I1008 13:30:26.880270 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b20789e-c4cd-4819-b966-b8897cc55b60" containerName="collect-profiles" Oct 08 13:30:26 crc kubenswrapper[5065]: E1008 13:30:26.880297 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" containerName="util" Oct 08 13:30:26 crc kubenswrapper[5065]: I1008 13:30:26.880308 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" containerName="util" Oct 08 13:30:26 crc kubenswrapper[5065]: E1008 13:30:26.880361 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" containerName="extract" Oct 08 
13:30:26 crc kubenswrapper[5065]: I1008 13:30:26.880372 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" containerName="extract" Oct 08 13:30:26 crc kubenswrapper[5065]: E1008 13:30:26.880388 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" containerName="pull" Oct 08 13:30:26 crc kubenswrapper[5065]: I1008 13:30:26.880399 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" containerName="pull" Oct 08 13:30:26 crc kubenswrapper[5065]: I1008 13:30:26.880576 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83" containerName="extract" Oct 08 13:30:26 crc kubenswrapper[5065]: I1008 13:30:26.880602 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b20789e-c4cd-4819-b966-b8897cc55b60" containerName="collect-profiles" Oct 08 13:30:26 crc kubenswrapper[5065]: I1008 13:30:26.881130 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-jgq8v" Oct 08 13:30:26 crc kubenswrapper[5065]: I1008 13:30:26.884870 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 08 13:30:26 crc kubenswrapper[5065]: I1008 13:30:26.887044 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-h45pg" Oct 08 13:30:26 crc kubenswrapper[5065]: I1008 13:30:26.887067 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 08 13:30:26 crc kubenswrapper[5065]: I1008 13:30:26.900260 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-jgq8v"] Oct 08 13:30:27 crc kubenswrapper[5065]: I1008 13:30:27.055122 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2l9z\" (UniqueName: \"kubernetes.io/projected/cdd97313-0ee9-4f04-9cc0-f017e4440a89-kube-api-access-l2l9z\") pod \"nmstate-operator-858ddd8f98-jgq8v\" (UID: \"cdd97313-0ee9-4f04-9cc0-f017e4440a89\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-jgq8v" Oct 08 13:30:27 crc kubenswrapper[5065]: I1008 13:30:27.155906 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2l9z\" (UniqueName: \"kubernetes.io/projected/cdd97313-0ee9-4f04-9cc0-f017e4440a89-kube-api-access-l2l9z\") pod \"nmstate-operator-858ddd8f98-jgq8v\" (UID: \"cdd97313-0ee9-4f04-9cc0-f017e4440a89\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-jgq8v" Oct 08 13:30:27 crc kubenswrapper[5065]: I1008 13:30:27.173925 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2l9z\" (UniqueName: \"kubernetes.io/projected/cdd97313-0ee9-4f04-9cc0-f017e4440a89-kube-api-access-l2l9z\") pod \"nmstate-operator-858ddd8f98-jgq8v\" (UID: \"cdd97313-0ee9-4f04-9cc0-f017e4440a89\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-jgq8v" Oct 08 13:30:27 crc kubenswrapper[5065]: I1008 13:30:27.197322 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-jgq8v" Oct 08 13:30:27 crc kubenswrapper[5065]: I1008 13:30:27.603198 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-jgq8v"] Oct 08 13:30:27 crc kubenswrapper[5065]: W1008 13:30:27.620596 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdd97313_0ee9_4f04_9cc0_f017e4440a89.slice/crio-d7d983effc6d0948d084035f5f40659682a159c804beb058de59a39e7263a859 WatchSource:0}: Error finding container d7d983effc6d0948d084035f5f40659682a159c804beb058de59a39e7263a859: Status 404 returned error can't find the container with id d7d983effc6d0948d084035f5f40659682a159c804beb058de59a39e7263a859 Oct 08 13:30:27 crc kubenswrapper[5065]: I1008 13:30:27.787657 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-jgq8v" event={"ID":"cdd97313-0ee9-4f04-9cc0-f017e4440a89","Type":"ContainerStarted","Data":"d7d983effc6d0948d084035f5f40659682a159c804beb058de59a39e7263a859"} Oct 08 13:30:31 crc kubenswrapper[5065]: I1008 13:30:31.808684 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-jgq8v" event={"ID":"cdd97313-0ee9-4f04-9cc0-f017e4440a89","Type":"ContainerStarted","Data":"e7a57d65eeee0f37aac272ed7444d35ef1ac1e41697d3459a3edbca48dd4362f"} Oct 08 13:30:31 crc kubenswrapper[5065]: I1008 13:30:31.828556 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-jgq8v" podStartSLOduration=2.435044108 podStartE2EDuration="5.828536998s" podCreationTimestamp="2025-10-08 13:30:26 +0000 UTC" firstStartedPulling="2025-10-08 13:30:27.622089313 +0000 UTC m=+729.399471070" lastFinishedPulling="2025-10-08 13:30:31.015582203 +0000 UTC m=+732.792963960" observedRunningTime="2025-10-08 13:30:31.824189142 +0000 UTC m=+733.601570919" watchObservedRunningTime="2025-10-08 13:30:31.828536998 +0000 UTC m=+733.605918765" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.740293 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-m44l9"] Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.741083 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-m44l9" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.744061 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-smrkh" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.758093 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-m44l9"] Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.760786 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-s72n4"] Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.761668 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.763781 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj"] Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.764335 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.766985 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.787668 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj"] Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.891130 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z"] Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.895220 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.905840 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.905875 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.905920 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-v54wv" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.909786 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z"] Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.937225 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/dde1f866-08d3-4a05-875b-c11f61ec5ed9-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-qmx4z\" (UID: \"dde1f866-08d3-4a05-875b-c11f61ec5ed9\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.937269 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqpxt\" (UniqueName: \"kubernetes.io/projected/dde1f866-08d3-4a05-875b-c11f61ec5ed9-kube-api-access-qqpxt\") pod \"nmstate-console-plugin-6b874cbd85-qmx4z\" (UID: \"dde1f866-08d3-4a05-875b-c11f61ec5ed9\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.937305 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4qrf\" (UniqueName: \"kubernetes.io/projected/71e6a10c-0de2-43a2-b18f-9610189ccc7d-kube-api-access-p4qrf\") pod \"nmstate-handler-s72n4\" (UID: \"71e6a10c-0de2-43a2-b18f-9610189ccc7d\") " pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.937335 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m89xg\" (UniqueName: \"kubernetes.io/projected/42352040-d859-4004-8b46-a472048a2c0a-kube-api-access-m89xg\") pod \"nmstate-webhook-6cdbc54649-nqrvj\" (UID: \"42352040-d859-4004-8b46-a472048a2c0a\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.937463 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/42352040-d859-4004-8b46-a472048a2c0a-tls-key-pair\") pod 
\"nmstate-webhook-6cdbc54649-nqrvj\" (UID: \"42352040-d859-4004-8b46-a472048a2c0a\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.937527 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/71e6a10c-0de2-43a2-b18f-9610189ccc7d-ovs-socket\") pod \"nmstate-handler-s72n4\" (UID: \"71e6a10c-0de2-43a2-b18f-9610189ccc7d\") " pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.937554 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f562x\" (UniqueName: \"kubernetes.io/projected/4c1ce008-ad03-4842-8e85-c214bb157619-kube-api-access-f562x\") pod \"nmstate-metrics-fdff9cb8d-m44l9\" (UID: \"4c1ce008-ad03-4842-8e85-c214bb157619\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-m44l9" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.937706 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/71e6a10c-0de2-43a2-b18f-9610189ccc7d-dbus-socket\") pod \"nmstate-handler-s72n4\" (UID: \"71e6a10c-0de2-43a2-b18f-9610189ccc7d\") " pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.937738 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/71e6a10c-0de2-43a2-b18f-9610189ccc7d-nmstate-lock\") pod \"nmstate-handler-s72n4\" (UID: \"71e6a10c-0de2-43a2-b18f-9610189ccc7d\") " pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:32 crc kubenswrapper[5065]: I1008 13:30:32.937777 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dde1f866-08d3-4a05-875b-c11f61ec5ed9-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-qmx4z\" (UID: \"dde1f866-08d3-4a05-875b-c11f61ec5ed9\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.038411 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4qrf\" (UniqueName: \"kubernetes.io/projected/71e6a10c-0de2-43a2-b18f-9610189ccc7d-kube-api-access-p4qrf\") pod \"nmstate-handler-s72n4\" (UID: \"71e6a10c-0de2-43a2-b18f-9610189ccc7d\") " pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.038532 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m89xg\" (UniqueName: \"kubernetes.io/projected/42352040-d859-4004-8b46-a472048a2c0a-kube-api-access-m89xg\") pod \"nmstate-webhook-6cdbc54649-nqrvj\" (UID: \"42352040-d859-4004-8b46-a472048a2c0a\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.038585 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/42352040-d859-4004-8b46-a472048a2c0a-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-nqrvj\" (UID: \"42352040-d859-4004-8b46-a472048a2c0a\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.038616 5065 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/71e6a10c-0de2-43a2-b18f-9610189ccc7d-ovs-socket\") pod \"nmstate-handler-s72n4\" (UID: \"71e6a10c-0de2-43a2-b18f-9610189ccc7d\") " pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.038642 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f562x\" (UniqueName: \"kubernetes.io/projected/4c1ce008-ad03-4842-8e85-c214bb157619-kube-api-access-f562x\") pod \"nmstate-metrics-fdff9cb8d-m44l9\" (UID: \"4c1ce008-ad03-4842-8e85-c214bb157619\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-m44l9" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.038669 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/71e6a10c-0de2-43a2-b18f-9610189ccc7d-dbus-socket\") pod \"nmstate-handler-s72n4\" (UID: \"71e6a10c-0de2-43a2-b18f-9610189ccc7d\") " pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.038706 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dde1f866-08d3-4a05-875b-c11f61ec5ed9-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-qmx4z\" (UID: \"dde1f866-08d3-4a05-875b-c11f61ec5ed9\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.038738 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/71e6a10c-0de2-43a2-b18f-9610189ccc7d-nmstate-lock\") pod \"nmstate-handler-s72n4\" (UID: \"71e6a10c-0de2-43a2-b18f-9610189ccc7d\") " pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.038781 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/dde1f866-08d3-4a05-875b-c11f61ec5ed9-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-qmx4z\" (UID: \"dde1f866-08d3-4a05-875b-c11f61ec5ed9\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.038810 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqpxt\" (UniqueName: \"kubernetes.io/projected/dde1f866-08d3-4a05-875b-c11f61ec5ed9-kube-api-access-qqpxt\") pod \"nmstate-console-plugin-6b874cbd85-qmx4z\" (UID: \"dde1f866-08d3-4a05-875b-c11f61ec5ed9\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" Oct 08 13:30:33 crc kubenswrapper[5065]: E1008 13:30:33.039601 5065 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Oct 08 13:30:33 crc kubenswrapper[5065]: E1008 13:30:33.039675 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42352040-d859-4004-8b46-a472048a2c0a-tls-key-pair podName:42352040-d859-4004-8b46-a472048a2c0a nodeName:}" failed. No retries permitted until 2025-10-08 13:30:33.539653378 +0000 UTC m=+735.317035145 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/42352040-d859-4004-8b46-a472048a2c0a-tls-key-pair") pod "nmstate-webhook-6cdbc54649-nqrvj" (UID: "42352040-d859-4004-8b46-a472048a2c0a") : secret "openshift-nmstate-webhook" not found Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.039890 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/71e6a10c-0de2-43a2-b18f-9610189ccc7d-ovs-socket\") pod \"nmstate-handler-s72n4\" (UID: \"71e6a10c-0de2-43a2-b18f-9610189ccc7d\") " pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.040376 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/71e6a10c-0de2-43a2-b18f-9610189ccc7d-dbus-socket\") pod \"nmstate-handler-s72n4\" (UID: \"71e6a10c-0de2-43a2-b18f-9610189ccc7d\") " pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:33 crc kubenswrapper[5065]: E1008 13:30:33.040491 5065 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Oct 08 13:30:33 crc kubenswrapper[5065]: E1008 13:30:33.040538 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dde1f866-08d3-4a05-875b-c11f61ec5ed9-plugin-serving-cert podName:dde1f866-08d3-4a05-875b-c11f61ec5ed9 nodeName:}" failed. No retries permitted until 2025-10-08 13:30:33.540527444 +0000 UTC m=+735.317909221 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/dde1f866-08d3-4a05-875b-c11f61ec5ed9-plugin-serving-cert") pod "nmstate-console-plugin-6b874cbd85-qmx4z" (UID: "dde1f866-08d3-4a05-875b-c11f61ec5ed9") : secret "plugin-serving-cert" not found Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.040568 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/71e6a10c-0de2-43a2-b18f-9610189ccc7d-nmstate-lock\") pod \"nmstate-handler-s72n4\" (UID: \"71e6a10c-0de2-43a2-b18f-9610189ccc7d\") " pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.041633 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/dde1f866-08d3-4a05-875b-c11f61ec5ed9-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-qmx4z\" (UID: \"dde1f866-08d3-4a05-875b-c11f61ec5ed9\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.061209 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4qrf\" (UniqueName: \"kubernetes.io/projected/71e6a10c-0de2-43a2-b18f-9610189ccc7d-kube-api-access-p4qrf\") pod \"nmstate-handler-s72n4\" (UID: \"71e6a10c-0de2-43a2-b18f-9610189ccc7d\") " pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.066565 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f562x\" (UniqueName: \"kubernetes.io/projected/4c1ce008-ad03-4842-8e85-c214bb157619-kube-api-access-f562x\") pod \"nmstate-metrics-fdff9cb8d-m44l9\" (UID: \"4c1ce008-ad03-4842-8e85-c214bb157619\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-m44l9" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.074698 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m89xg\" (UniqueName: \"kubernetes.io/projected/42352040-d859-4004-8b46-a472048a2c0a-kube-api-access-m89xg\") pod \"nmstate-webhook-6cdbc54649-nqrvj\" (UID: \"42352040-d859-4004-8b46-a472048a2c0a\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.077016 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqpxt\" (UniqueName: \"kubernetes.io/projected/dde1f866-08d3-4a05-875b-c11f61ec5ed9-kube-api-access-qqpxt\") pod \"nmstate-console-plugin-6b874cbd85-qmx4z\" (UID: \"dde1f866-08d3-4a05-875b-c11f61ec5ed9\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.079316 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:33 crc kubenswrapper[5065]: W1008 13:30:33.099630 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71e6a10c_0de2_43a2_b18f_9610189ccc7d.slice/crio-c55dc8c8b4e0ed7cfc0b4f27876c4e44dbb4c2d56edf770d584a1a39f5d6107e WatchSource:0}: Error finding container c55dc8c8b4e0ed7cfc0b4f27876c4e44dbb4c2d56edf770d584a1a39f5d6107e: Status 404 returned error can't find the container with id c55dc8c8b4e0ed7cfc0b4f27876c4e44dbb4c2d56edf770d584a1a39f5d6107e Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.108640 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6454866fb7-6pxqh"] Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.109471 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.131846 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6454866fb7-6pxqh"] Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.242369 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jgvj\" (UniqueName: \"kubernetes.io/projected/1ae906ab-6089-46aa-b436-40a2b33ff1b9-kube-api-access-2jgvj\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.242452 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ae906ab-6089-46aa-b436-40a2b33ff1b9-trusted-ca-bundle\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.242488 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ae906ab-6089-46aa-b436-40a2b33ff1b9-oauth-serving-cert\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.242573 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ae906ab-6089-46aa-b436-40a2b33ff1b9-service-ca\") pod \"console-6454866fb7-6pxqh\" (UID: 
\"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.242595 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ae906ab-6089-46aa-b436-40a2b33ff1b9-console-oauth-config\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.242625 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ae906ab-6089-46aa-b436-40a2b33ff1b9-console-serving-cert\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.242655 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ae906ab-6089-46aa-b436-40a2b33ff1b9-console-config\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.343979 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jgvj\" (UniqueName: \"kubernetes.io/projected/1ae906ab-6089-46aa-b436-40a2b33ff1b9-kube-api-access-2jgvj\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.344047 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ae906ab-6089-46aa-b436-40a2b33ff1b9-trusted-ca-bundle\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.344108 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ae906ab-6089-46aa-b436-40a2b33ff1b9-oauth-serving-cert\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.344188 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ae906ab-6089-46aa-b436-40a2b33ff1b9-service-ca\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.344214 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ae906ab-6089-46aa-b436-40a2b33ff1b9-console-oauth-config\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.344260 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1ae906ab-6089-46aa-b436-40a2b33ff1b9-console-serving-cert\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.344305 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ae906ab-6089-46aa-b436-40a2b33ff1b9-console-config\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.345405 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ae906ab-6089-46aa-b436-40a2b33ff1b9-service-ca\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.345492 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ae906ab-6089-46aa-b436-40a2b33ff1b9-trusted-ca-bundle\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.346117 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ae906ab-6089-46aa-b436-40a2b33ff1b9-oauth-serving-cert\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.346948 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ae906ab-6089-46aa-b436-40a2b33ff1b9-console-config\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.348710 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ae906ab-6089-46aa-b436-40a2b33ff1b9-console-oauth-config\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.349384 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ae906ab-6089-46aa-b436-40a2b33ff1b9-console-serving-cert\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.358649 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-m44l9" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.365778 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jgvj\" (UniqueName: \"kubernetes.io/projected/1ae906ab-6089-46aa-b436-40a2b33ff1b9-kube-api-access-2jgvj\") pod \"console-6454866fb7-6pxqh\" (UID: \"1ae906ab-6089-46aa-b436-40a2b33ff1b9\") " pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.441732 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.549732 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/42352040-d859-4004-8b46-a472048a2c0a-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-nqrvj\" (UID: \"42352040-d859-4004-8b46-a472048a2c0a\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.549801 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dde1f866-08d3-4a05-875b-c11f61ec5ed9-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-qmx4z\" (UID: \"dde1f866-08d3-4a05-875b-c11f61ec5ed9\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.554375 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dde1f866-08d3-4a05-875b-c11f61ec5ed9-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-qmx4z\" (UID: \"dde1f866-08d3-4a05-875b-c11f61ec5ed9\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.554455 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/42352040-d859-4004-8b46-a472048a2c0a-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-nqrvj\" (UID: \"42352040-d859-4004-8b46-a472048a2c0a\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.566846 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-m44l9"] Oct 08 13:30:33 crc kubenswrapper[5065]: W1008 13:30:33.574713 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c1ce008_ad03_4842_8e85_c214bb157619.slice/crio-1f57e36685d8ac63a57cc60e29ae67865b52fe7501e25269bd3be61d12aaa0c0 WatchSource:0}: Error finding container 1f57e36685d8ac63a57cc60e29ae67865b52fe7501e25269bd3be61d12aaa0c0: Status 404 returned error can't find the container with id 1f57e36685d8ac63a57cc60e29ae67865b52fe7501e25269bd3be61d12aaa0c0 Oct 08 13:30:33 crc kubenswrapper[5065]: W1008 13:30:33.629178 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ae906ab_6089_46aa_b436_40a2b33ff1b9.slice/crio-a98746f48d0f38ddb71231b623af429736c2182c016bcc59d3f70fba25d7d969 WatchSource:0}: Error finding container a98746f48d0f38ddb71231b623af429736c2182c016bcc59d3f70fba25d7d969: Status 404 returned error can't find the container with id a98746f48d0f38ddb71231b623af429736c2182c016bcc59d3f70fba25d7d969 Oct 
08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.630060 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6454866fb7-6pxqh"] Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.690189 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.819016 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.828863 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6454866fb7-6pxqh" event={"ID":"1ae906ab-6089-46aa-b436-40a2b33ff1b9","Type":"ContainerStarted","Data":"1624fe8ee6ade83afc6da94db5c567358668c846284a41eea08ddc3d1a80d2f7"} Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.828908 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6454866fb7-6pxqh" event={"ID":"1ae906ab-6089-46aa-b436-40a2b33ff1b9","Type":"ContainerStarted","Data":"a98746f48d0f38ddb71231b623af429736c2182c016bcc59d3f70fba25d7d969"} Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.831146 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-m44l9" event={"ID":"4c1ce008-ad03-4842-8e85-c214bb157619","Type":"ContainerStarted","Data":"1f57e36685d8ac63a57cc60e29ae67865b52fe7501e25269bd3be61d12aaa0c0"} Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.832494 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-s72n4" event={"ID":"71e6a10c-0de2-43a2-b18f-9610189ccc7d","Type":"ContainerStarted","Data":"c55dc8c8b4e0ed7cfc0b4f27876c4e44dbb4c2d56edf770d584a1a39f5d6107e"} Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.849068 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6454866fb7-6pxqh" podStartSLOduration=0.84904718 podStartE2EDuration="849.04718ms" podCreationTimestamp="2025-10-08 13:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:30:33.848839584 +0000 UTC m=+735.626221351" watchObservedRunningTime="2025-10-08 13:30:33.84904718 +0000 UTC m=+735.626428927" Oct 08 13:30:33 crc kubenswrapper[5065]: I1008 13:30:33.878826 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj"] Oct 08 13:30:34 crc kubenswrapper[5065]: I1008 13:30:34.206112 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z"] Oct 08 13:30:34 crc kubenswrapper[5065]: W1008 13:30:34.216080 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddde1f866_08d3_4a05_875b_c11f61ec5ed9.slice/crio-35f1041fa3583dd1049f18fde1dceb73cb3f904e6ebe8f90b2dd4c121b2ec9b8 WatchSource:0}: Error finding container 35f1041fa3583dd1049f18fde1dceb73cb3f904e6ebe8f90b2dd4c121b2ec9b8: Status 404 returned error can't find the container with id 35f1041fa3583dd1049f18fde1dceb73cb3f904e6ebe8f90b2dd4c121b2ec9b8 Oct 08 13:30:34 crc kubenswrapper[5065]: I1008 13:30:34.840743 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" 
event={"ID":"42352040-d859-4004-8b46-a472048a2c0a","Type":"ContainerStarted","Data":"7feecc5f3284662e3a07ffbc87af6d129ae96d293798048aa652d6b88fbf0622"} Oct 08 13:30:34 crc kubenswrapper[5065]: I1008 13:30:34.843687 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" event={"ID":"dde1f866-08d3-4a05-875b-c11f61ec5ed9","Type":"ContainerStarted","Data":"35f1041fa3583dd1049f18fde1dceb73cb3f904e6ebe8f90b2dd4c121b2ec9b8"} Oct 08 13:30:36 crc kubenswrapper[5065]: I1008 13:30:36.855047 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" event={"ID":"dde1f866-08d3-4a05-875b-c11f61ec5ed9","Type":"ContainerStarted","Data":"c4dc5b03d4666e15d93c342e0041a1710cfb21bebec46dacd716a78dbf608afd"} Oct 08 13:30:36 crc kubenswrapper[5065]: I1008 13:30:36.859442 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-m44l9" event={"ID":"4c1ce008-ad03-4842-8e85-c214bb157619","Type":"ContainerStarted","Data":"8b31140880df698382d0b9666d4032db56902e8a8a25df65a1f7faeb50bf1d75"} Oct 08 13:30:36 crc kubenswrapper[5065]: I1008 13:30:36.862236 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-s72n4" event={"ID":"71e6a10c-0de2-43a2-b18f-9610189ccc7d","Type":"ContainerStarted","Data":"0a0b563a3bb16e4e0fb8c14456aadec361c60393ddd52afede7b9329c4a52032"} Oct 08 13:30:36 crc kubenswrapper[5065]: I1008 13:30:36.864478 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" event={"ID":"42352040-d859-4004-8b46-a472048a2c0a","Type":"ContainerStarted","Data":"eda7e3c937d0d7059f837b9deebe7c79d61bf467372327796ba0a10360ac1ab0"} Oct 08 13:30:36 crc kubenswrapper[5065]: I1008 13:30:36.864650 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" Oct 08 13:30:36 crc kubenswrapper[5065]: I1008 13:30:36.872533 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-qmx4z" podStartSLOduration=2.582018974 podStartE2EDuration="4.872511344s" podCreationTimestamp="2025-10-08 13:30:32 +0000 UTC" firstStartedPulling="2025-10-08 13:30:34.219965191 +0000 UTC m=+735.997346948" lastFinishedPulling="2025-10-08 13:30:36.510457561 +0000 UTC m=+738.287839318" observedRunningTime="2025-10-08 13:30:36.866614092 +0000 UTC m=+738.643995849" watchObservedRunningTime="2025-10-08 13:30:36.872511344 +0000 UTC m=+738.649893101" Oct 08 13:30:36 crc kubenswrapper[5065]: I1008 13:30:36.921803 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-s72n4" podStartSLOduration=1.515643211 podStartE2EDuration="4.92178291s" podCreationTimestamp="2025-10-08 13:30:32 +0000 UTC" firstStartedPulling="2025-10-08 13:30:33.101852161 +0000 UTC m=+734.879233918" lastFinishedPulling="2025-10-08 13:30:36.50799186 +0000 UTC m=+738.285373617" observedRunningTime="2025-10-08 13:30:36.917228588 +0000 UTC m=+738.694610365" watchObservedRunningTime="2025-10-08 13:30:36.92178291 +0000 UTC m=+738.699164677" Oct 08 13:30:36 crc kubenswrapper[5065]: I1008 13:30:36.935744 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" podStartSLOduration=2.306107262 podStartE2EDuration="4.935723557s" podCreationTimestamp="2025-10-08 13:30:32 +0000 UTC" 
firstStartedPulling="2025-10-08 13:30:33.904040393 +0000 UTC m=+735.681422150" lastFinishedPulling="2025-10-08 13:30:36.533656668 +0000 UTC m=+738.311038445" observedRunningTime="2025-10-08 13:30:36.929835195 +0000 UTC m=+738.707216972" watchObservedRunningTime="2025-10-08 13:30:36.935723557 +0000 UTC m=+738.713105314" Oct 08 13:30:37 crc kubenswrapper[5065]: I1008 13:30:37.870311 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:38 crc kubenswrapper[5065]: I1008 13:30:38.883612 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-m44l9" event={"ID":"4c1ce008-ad03-4842-8e85-c214bb157619","Type":"ContainerStarted","Data":"719257e9fdcfca877b9cc8433f8089e798d11df3cb0692ce0b0bc2d5221de948"} Oct 08 13:30:38 crc kubenswrapper[5065]: I1008 13:30:38.914595 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-m44l9" podStartSLOduration=1.8156225350000001 podStartE2EDuration="6.914575254s" podCreationTimestamp="2025-10-08 13:30:32 +0000 UTC" firstStartedPulling="2025-10-08 13:30:33.577126264 +0000 UTC m=+735.354508021" lastFinishedPulling="2025-10-08 13:30:38.676078983 +0000 UTC m=+740.453460740" observedRunningTime="2025-10-08 13:30:38.913141913 +0000 UTC m=+740.690523680" watchObservedRunningTime="2025-10-08 13:30:38.914575254 +0000 UTC m=+740.691957011" Oct 08 13:30:43 crc kubenswrapper[5065]: I1008 13:30:43.116618 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-s72n4" Oct 08 13:30:43 crc kubenswrapper[5065]: I1008 13:30:43.442923 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:43 crc kubenswrapper[5065]: I1008 13:30:43.443001 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:43 crc kubenswrapper[5065]: I1008 13:30:43.447908 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:43 crc kubenswrapper[5065]: I1008 13:30:43.915244 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6454866fb7-6pxqh" Oct 08 13:30:44 crc kubenswrapper[5065]: I1008 13:30:44.007236 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-w27qr"] Oct 08 13:30:46 crc kubenswrapper[5065]: I1008 13:30:46.956339 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fplnt"] Oct 08 13:30:46 crc kubenswrapper[5065]: I1008 13:30:46.956835 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" podUID="09664f6d-52dd-48af-b1ad-d19e58094ecc" containerName="controller-manager" containerID="cri-o://e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f" gracePeriod=30 Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.051660 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh"] Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.051854 5065 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" podUID="dd72b69e-5d4d-44ae-86ec-00f5b52c49a3" containerName="route-controller-manager" containerID="cri-o://135842ba36e1287c50ed2fd84fa736844ba0cce2875f13cd41b0371ee09c2877" gracePeriod=30 Oct 08 13:30:47 crc kubenswrapper[5065]: E1008 13:30:47.053385 5065 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09664f6d_52dd_48af_b1ad_d19e58094ecc.slice/crio-conmon-e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f.scope\": RecentStats: unable to find data in memory cache]" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.275349 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.398662 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.444159 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09664f6d-52dd-48af-b1ad-d19e58094ecc-serving-cert\") pod \"09664f6d-52dd-48af-b1ad-d19e58094ecc\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.444215 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb6jm\" (UniqueName: \"kubernetes.io/projected/09664f6d-52dd-48af-b1ad-d19e58094ecc-kube-api-access-bb6jm\") pod \"09664f6d-52dd-48af-b1ad-d19e58094ecc\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.444241 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-client-ca\") pod \"09664f6d-52dd-48af-b1ad-d19e58094ecc\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.444331 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-proxy-ca-bundles\") pod \"09664f6d-52dd-48af-b1ad-d19e58094ecc\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.444360 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-client-ca\") pod \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.444376 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-config\") pod \"09664f6d-52dd-48af-b1ad-d19e58094ecc\" (UID: \"09664f6d-52dd-48af-b1ad-d19e58094ecc\") " Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.444963 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-client-ca" (OuterVolumeSpecName: "client-ca") pod "09664f6d-52dd-48af-b1ad-d19e58094ecc" (UID: 
"09664f6d-52dd-48af-b1ad-d19e58094ecc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.445027 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-config" (OuterVolumeSpecName: "config") pod "09664f6d-52dd-48af-b1ad-d19e58094ecc" (UID: "09664f6d-52dd-48af-b1ad-d19e58094ecc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.445037 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-client-ca" (OuterVolumeSpecName: "client-ca") pod "dd72b69e-5d4d-44ae-86ec-00f5b52c49a3" (UID: "dd72b69e-5d4d-44ae-86ec-00f5b52c49a3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.445040 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "09664f6d-52dd-48af-b1ad-d19e58094ecc" (UID: "09664f6d-52dd-48af-b1ad-d19e58094ecc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.449882 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09664f6d-52dd-48af-b1ad-d19e58094ecc-kube-api-access-bb6jm" (OuterVolumeSpecName: "kube-api-access-bb6jm") pod "09664f6d-52dd-48af-b1ad-d19e58094ecc" (UID: "09664f6d-52dd-48af-b1ad-d19e58094ecc"). InnerVolumeSpecName "kube-api-access-bb6jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.451668 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09664f6d-52dd-48af-b1ad-d19e58094ecc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09664f6d-52dd-48af-b1ad-d19e58094ecc" (UID: "09664f6d-52dd-48af-b1ad-d19e58094ecc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.544825 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-config\") pod \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.544942 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62kqv\" (UniqueName: \"kubernetes.io/projected/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-kube-api-access-62kqv\") pod \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.544972 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-serving-cert\") pod \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\" (UID: \"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3\") " Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.545117 5065 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.545130 5065 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.545139 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.545147 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09664f6d-52dd-48af-b1ad-d19e58094ecc-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.545155 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb6jm\" (UniqueName: \"kubernetes.io/projected/09664f6d-52dd-48af-b1ad-d19e58094ecc-kube-api-access-bb6jm\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.545164 5065 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09664f6d-52dd-48af-b1ad-d19e58094ecc-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.545462 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-config" (OuterVolumeSpecName: "config") pod "dd72b69e-5d4d-44ae-86ec-00f5b52c49a3" (UID: "dd72b69e-5d4d-44ae-86ec-00f5b52c49a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.548288 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dd72b69e-5d4d-44ae-86ec-00f5b52c49a3" (UID: "dd72b69e-5d4d-44ae-86ec-00f5b52c49a3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.548471 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-kube-api-access-62kqv" (OuterVolumeSpecName: "kube-api-access-62kqv") pod "dd72b69e-5d4d-44ae-86ec-00f5b52c49a3" (UID: "dd72b69e-5d4d-44ae-86ec-00f5b52c49a3"). InnerVolumeSpecName "kube-api-access-62kqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.647502 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62kqv\" (UniqueName: \"kubernetes.io/projected/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-kube-api-access-62kqv\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.647561 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.647581 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.932446 5065 generic.go:334] "Generic (PLEG): container finished" podID="09664f6d-52dd-48af-b1ad-d19e58094ecc" containerID="e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f" exitCode=0 Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.932516 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.932501 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" event={"ID":"09664f6d-52dd-48af-b1ad-d19e58094ecc","Type":"ContainerDied","Data":"e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f"} Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.932891 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fplnt" event={"ID":"09664f6d-52dd-48af-b1ad-d19e58094ecc","Type":"ContainerDied","Data":"2ecb73939ca27f2cd04627979e05b7f458ec77ffc015f0c25bc890b124baa7e6"} Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.932917 5065 scope.go:117] "RemoveContainer" containerID="e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.937600 5065 generic.go:334] "Generic (PLEG): container finished" podID="dd72b69e-5d4d-44ae-86ec-00f5b52c49a3" containerID="135842ba36e1287c50ed2fd84fa736844ba0cce2875f13cd41b0371ee09c2877" exitCode=0 Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.937652 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" event={"ID":"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3","Type":"ContainerDied","Data":"135842ba36e1287c50ed2fd84fa736844ba0cce2875f13cd41b0371ee09c2877"} Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.937663 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.937770 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh" event={"ID":"dd72b69e-5d4d-44ae-86ec-00f5b52c49a3","Type":"ContainerDied","Data":"fafbd122a7e3a2b640d062801f79abb7b623ece880f5c36f41fba8b2ae8f1606"} Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.976436 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh"] Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.985042 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r4tnh"] Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.985958 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fplnt"] Oct 08 13:30:47 crc kubenswrapper[5065]: I1008 13:30:47.989490 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fplnt"] Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.009429 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-875488fcf-gf2t7"] Oct 08 13:30:48 crc kubenswrapper[5065]: E1008 13:30:48.009616 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09664f6d-52dd-48af-b1ad-d19e58094ecc" containerName="controller-manager" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.009627 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="09664f6d-52dd-48af-b1ad-d19e58094ecc" containerName="controller-manager" Oct 08 13:30:48 crc kubenswrapper[5065]: E1008 13:30:48.009645 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd72b69e-5d4d-44ae-86ec-00f5b52c49a3" containerName="route-controller-manager" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.009651 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd72b69e-5d4d-44ae-86ec-00f5b52c49a3" containerName="route-controller-manager" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.009734 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="09664f6d-52dd-48af-b1ad-d19e58094ecc" containerName="controller-manager" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.009749 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd72b69e-5d4d-44ae-86ec-00f5b52c49a3" containerName="route-controller-manager" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.010075 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.012975 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.013165 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.013351 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.013569 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.013653 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.013697 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.019912 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.023189 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-875488fcf-gf2t7"] Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.125239 5065 scope.go:117] "RemoveContainer" containerID="e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f" Oct 08 13:30:48 crc kubenswrapper[5065]: E1008 13:30:48.126536 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f\": container with ID starting with e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f not found: ID does not exist" containerID="e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.126568 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f"} err="failed to get container status \"e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f\": rpc error: code = NotFound desc = could not find container \"e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f\": container with ID starting with e621b4ba643ebbdcc9d277fcf761c0c482045a58f43e236ceaf92290839c4f8f not found: ID does not exist" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.126618 5065 scope.go:117] "RemoveContainer" containerID="135842ba36e1287c50ed2fd84fa736844ba0cce2875f13cd41b0371ee09c2877" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.127959 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-875488fcf-gf2t7"] Oct 08 13:30:48 crc kubenswrapper[5065]: E1008 13:30:48.128375 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-8vdqd proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" 
podUID="4b98751d-7cc7-4575-9d04-e57e18b6ae31" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.138326 5065 scope.go:117] "RemoveContainer" containerID="135842ba36e1287c50ed2fd84fa736844ba0cce2875f13cd41b0371ee09c2877" Oct 08 13:30:48 crc kubenswrapper[5065]: E1008 13:30:48.139834 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"135842ba36e1287c50ed2fd84fa736844ba0cce2875f13cd41b0371ee09c2877\": container with ID starting with 135842ba36e1287c50ed2fd84fa736844ba0cce2875f13cd41b0371ee09c2877 not found: ID does not exist" containerID="135842ba36e1287c50ed2fd84fa736844ba0cce2875f13cd41b0371ee09c2877" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.139870 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"135842ba36e1287c50ed2fd84fa736844ba0cce2875f13cd41b0371ee09c2877"} err="failed to get container status \"135842ba36e1287c50ed2fd84fa736844ba0cce2875f13cd41b0371ee09c2877\": rpc error: code = NotFound desc = could not find container \"135842ba36e1287c50ed2fd84fa736844ba0cce2875f13cd41b0371ee09c2877\": container with ID starting with 135842ba36e1287c50ed2fd84fa736844ba0cce2875f13cd41b0371ee09c2877 not found: ID does not exist" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.154189 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-config\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.154258 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-client-ca\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.154289 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b98751d-7cc7-4575-9d04-e57e18b6ae31-serving-cert\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.154314 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vdqd\" (UniqueName: \"kubernetes.io/projected/4b98751d-7cc7-4575-9d04-e57e18b6ae31-kube-api-access-8vdqd\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.154388 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-proxy-ca-bundles\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.166384 5065 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm"] Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.167235 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.168957 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.169129 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.169146 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.169464 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.169550 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.169751 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.177983 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm"] Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.255034 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b98751d-7cc7-4575-9d04-e57e18b6ae31-serving-cert\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.255074 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vdqd\" (UniqueName: \"kubernetes.io/projected/4b98751d-7cc7-4575-9d04-e57e18b6ae31-kube-api-access-8vdqd\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.255106 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdb0961-7dab-41b2-ac9e-8aea79a81721-config\") pod \"route-controller-manager-8b69b8884-lrnlm\" (UID: \"1cdb0961-7dab-41b2-ac9e-8aea79a81721\") " pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.255139 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-proxy-ca-bundles\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.255157 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cdb0961-7dab-41b2-ac9e-8aea79a81721-client-ca\") pod \"route-controller-manager-8b69b8884-lrnlm\" (UID: \"1cdb0961-7dab-41b2-ac9e-8aea79a81721\") " pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.255214 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cdb0961-7dab-41b2-ac9e-8aea79a81721-serving-cert\") pod \"route-controller-manager-8b69b8884-lrnlm\" (UID: \"1cdb0961-7dab-41b2-ac9e-8aea79a81721\") " pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.255258 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-config\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.255281 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxtdv\" (UniqueName: \"kubernetes.io/projected/1cdb0961-7dab-41b2-ac9e-8aea79a81721-kube-api-access-nxtdv\") pod \"route-controller-manager-8b69b8884-lrnlm\" (UID: \"1cdb0961-7dab-41b2-ac9e-8aea79a81721\") " pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.255302 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-client-ca\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.256210 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-client-ca\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.256401 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-config\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.256598 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-proxy-ca-bundles\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.260040 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b98751d-7cc7-4575-9d04-e57e18b6ae31-serving-cert\") pod \"controller-manager-875488fcf-gf2t7\" (UID: 
\"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.276539 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vdqd\" (UniqueName: \"kubernetes.io/projected/4b98751d-7cc7-4575-9d04-e57e18b6ae31-kube-api-access-8vdqd\") pod \"controller-manager-875488fcf-gf2t7\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.356280 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxtdv\" (UniqueName: \"kubernetes.io/projected/1cdb0961-7dab-41b2-ac9e-8aea79a81721-kube-api-access-nxtdv\") pod \"route-controller-manager-8b69b8884-lrnlm\" (UID: \"1cdb0961-7dab-41b2-ac9e-8aea79a81721\") " pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.356531 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdb0961-7dab-41b2-ac9e-8aea79a81721-config\") pod \"route-controller-manager-8b69b8884-lrnlm\" (UID: \"1cdb0961-7dab-41b2-ac9e-8aea79a81721\") " pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.356666 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cdb0961-7dab-41b2-ac9e-8aea79a81721-client-ca\") pod \"route-controller-manager-8b69b8884-lrnlm\" (UID: \"1cdb0961-7dab-41b2-ac9e-8aea79a81721\") " pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.356773 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cdb0961-7dab-41b2-ac9e-8aea79a81721-serving-cert\") pod \"route-controller-manager-8b69b8884-lrnlm\" (UID: \"1cdb0961-7dab-41b2-ac9e-8aea79a81721\") " pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.357619 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdb0961-7dab-41b2-ac9e-8aea79a81721-config\") pod \"route-controller-manager-8b69b8884-lrnlm\" (UID: \"1cdb0961-7dab-41b2-ac9e-8aea79a81721\") " pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.357627 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cdb0961-7dab-41b2-ac9e-8aea79a81721-client-ca\") pod \"route-controller-manager-8b69b8884-lrnlm\" (UID: \"1cdb0961-7dab-41b2-ac9e-8aea79a81721\") " pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.361120 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cdb0961-7dab-41b2-ac9e-8aea79a81721-serving-cert\") pod \"route-controller-manager-8b69b8884-lrnlm\" (UID: \"1cdb0961-7dab-41b2-ac9e-8aea79a81721\") " pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc 
kubenswrapper[5065]: I1008 13:30:48.380592 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxtdv\" (UniqueName: \"kubernetes.io/projected/1cdb0961-7dab-41b2-ac9e-8aea79a81721-kube-api-access-nxtdv\") pod \"route-controller-manager-8b69b8884-lrnlm\" (UID: \"1cdb0961-7dab-41b2-ac9e-8aea79a81721\") " pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.482249 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.879332 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09664f6d-52dd-48af-b1ad-d19e58094ecc" path="/var/lib/kubelet/pods/09664f6d-52dd-48af-b1ad-d19e58094ecc/volumes" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.880073 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd72b69e-5d4d-44ae-86ec-00f5b52c49a3" path="/var/lib/kubelet/pods/dd72b69e-5d4d-44ae-86ec-00f5b52c49a3/volumes" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.929242 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm"] Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.946221 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" event={"ID":"1cdb0961-7dab-41b2-ac9e-8aea79a81721","Type":"ContainerStarted","Data":"9e3e3f75fb990079e271c5c6eb5c3ad7da20638fe457b079b23fe570b6ddfc6d"} Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.948704 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:48 crc kubenswrapper[5065]: I1008 13:30:48.968184 5065 util.go:30] "No sandbox for pod can be found. 
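Each volume above moves through the same three messages, in order: "operationExecutor.VerifyControllerAttachedVolume started" (reconciler_common.go:245), "operationExecutor.MountVolume started" (reconciler_common.go:218), and "MountVolume.SetUp succeeded" (operation_generator.go:637), with the owning pod UID embedded in the volume UniqueName. A sketch that groups entries accordingly, under the assumption that the UniqueName layout <plugin>/<pod-uid>-<volume> seen here is stable; the function name is illustrative:

```python
import re
from collections import defaultdict

# The three mount-path messages seen above, in lifecycle order.
PHASES = [
    "operationExecutor.VerifyControllerAttachedVolume started",
    "operationExecutor.MountVolume started",
    "MountVolume.SetUp succeeded",
]
# Pod UID and volume name as embedded in the volume UniqueName, e.g.
# kubernetes.io/configmap/4b98751d-...-client-ca
UNIQUE = re.compile(
    r'kubernetes\.io/[a-z-]+/'
    r'([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})-'
    r'([A-Za-z0-9.-]+)'
)

def mount_progress(lines):
    """Return (pod UID, volume) -> index of the furthest phase reached,
    so volumes stuck before 'SetUp succeeded' stand out."""
    progress = defaultdict(int)
    for line in lines:
        for rank, phase in enumerate(PHASES, start=1):
            if phase in line:
                m = UNIQUE.search(line)
                if m:
                    key = (m.group(1), m.group(2))
                    progress[key] = max(progress[key], rank)
    return progress
```

In this section every volume for both controller-manager pods reaches rank 3 within a few hundred milliseconds, which is what a healthy mount path looks like.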
Need to start a new one" pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.065997 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-config\") pod \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.066049 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b98751d-7cc7-4575-9d04-e57e18b6ae31-serving-cert\") pod \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.066101 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-proxy-ca-bundles\") pod \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.066148 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-client-ca\") pod \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.066174 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vdqd\" (UniqueName: \"kubernetes.io/projected/4b98751d-7cc7-4575-9d04-e57e18b6ae31-kube-api-access-8vdqd\") pod \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\" (UID: \"4b98751d-7cc7-4575-9d04-e57e18b6ae31\") " Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.066813 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-client-ca" (OuterVolumeSpecName: "client-ca") pod "4b98751d-7cc7-4575-9d04-e57e18b6ae31" (UID: "4b98751d-7cc7-4575-9d04-e57e18b6ae31"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.066885 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4b98751d-7cc7-4575-9d04-e57e18b6ae31" (UID: "4b98751d-7cc7-4575-9d04-e57e18b6ae31"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.070777 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b98751d-7cc7-4575-9d04-e57e18b6ae31-kube-api-access-8vdqd" (OuterVolumeSpecName: "kube-api-access-8vdqd") pod "4b98751d-7cc7-4575-9d04-e57e18b6ae31" (UID: "4b98751d-7cc7-4575-9d04-e57e18b6ae31"). InnerVolumeSpecName "kube-api-access-8vdqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.070991 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b98751d-7cc7-4575-9d04-e57e18b6ae31-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4b98751d-7cc7-4575-9d04-e57e18b6ae31" (UID: "4b98751d-7cc7-4575-9d04-e57e18b6ae31"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.071624 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-config" (OuterVolumeSpecName: "config") pod "4b98751d-7cc7-4575-9d04-e57e18b6ae31" (UID: "4b98751d-7cc7-4575-9d04-e57e18b6ae31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.166875 5065 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.166911 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vdqd\" (UniqueName: \"kubernetes.io/projected/4b98751d-7cc7-4575-9d04-e57e18b6ae31-kube-api-access-8vdqd\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.166925 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.166937 5065 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b98751d-7cc7-4575-9d04-e57e18b6ae31-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.166948 5065 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b98751d-7cc7-4575-9d04-e57e18b6ae31-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.955181 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-875488fcf-gf2t7" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.955193 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" event={"ID":"1cdb0961-7dab-41b2-ac9e-8aea79a81721","Type":"ContainerStarted","Data":"b159c5451a45eccb8ed7ee225849077c996b0332dd0881e496795fc5181c6432"} Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.955723 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.959577 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.970655 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8b69b8884-lrnlm" podStartSLOduration=1.970641934 podStartE2EDuration="1.970641934s" podCreationTimestamp="2025-10-08 13:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:30:49.968886502 +0000 UTC m=+751.746268259" watchObservedRunningTime="2025-10-08 13:30:49.970641934 +0000 UTC m=+751.748023691" Oct 08 13:30:49 crc kubenswrapper[5065]: I1008 13:30:49.998356 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-875488fcf-gf2t7"] Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.002581 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66bf647c68-gpk5r"] Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.003319 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.013942 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-875488fcf-gf2t7"] Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.015651 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.016056 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.016140 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.016407 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.019230 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.023458 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.033199 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.042112 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66bf647c68-gpk5r"] Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.077781 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/063af3f5-26d8-41a8-8352-d199b63754b6-proxy-ca-bundles\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.077942 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77246\" (UniqueName: \"kubernetes.io/projected/063af3f5-26d8-41a8-8352-d199b63754b6-kube-api-access-77246\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.078009 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063af3f5-26d8-41a8-8352-d199b63754b6-config\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.078039 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/063af3f5-26d8-41a8-8352-d199b63754b6-serving-cert\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc 
kubenswrapper[5065]: I1008 13:30:50.078078 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/063af3f5-26d8-41a8-8352-d199b63754b6-client-ca\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.178531 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/063af3f5-26d8-41a8-8352-d199b63754b6-client-ca\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.178574 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/063af3f5-26d8-41a8-8352-d199b63754b6-proxy-ca-bundles\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.178623 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77246\" (UniqueName: \"kubernetes.io/projected/063af3f5-26d8-41a8-8352-d199b63754b6-kube-api-access-77246\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.178649 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063af3f5-26d8-41a8-8352-d199b63754b6-config\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.178674 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/063af3f5-26d8-41a8-8352-d199b63754b6-serving-cert\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.179495 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/063af3f5-26d8-41a8-8352-d199b63754b6-client-ca\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.180565 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/063af3f5-26d8-41a8-8352-d199b63754b6-proxy-ca-bundles\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.180781 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/063af3f5-26d8-41a8-8352-d199b63754b6-config\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.183288 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/063af3f5-26d8-41a8-8352-d199b63754b6-serving-cert\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.209206 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77246\" (UniqueName: \"kubernetes.io/projected/063af3f5-26d8-41a8-8352-d199b63754b6-kube-api-access-77246\") pod \"controller-manager-66bf647c68-gpk5r\" (UID: \"063af3f5-26d8-41a8-8352-d199b63754b6\") " pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.346275 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.548033 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66bf647c68-gpk5r"] Oct 08 13:30:50 crc kubenswrapper[5065]: W1008 13:30:50.555704 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod063af3f5_26d8_41a8_8352_d199b63754b6.slice/crio-9bcc8df9e19f106d187c9809468525ea8b9f284cb1c34edf9706f19b2c0044fa WatchSource:0}: Error finding container 9bcc8df9e19f106d187c9809468525ea8b9f284cb1c34edf9706f19b2c0044fa: Status 404 returned error can't find the container with id 9bcc8df9e19f106d187c9809468525ea8b9f284cb1c34edf9706f19b2c0044fa Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.880604 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b98751d-7cc7-4575-9d04-e57e18b6ae31" path="/var/lib/kubelet/pods/4b98751d-7cc7-4575-9d04-e57e18b6ae31/volumes" Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.962161 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" event={"ID":"063af3f5-26d8-41a8-8352-d199b63754b6","Type":"ContainerStarted","Data":"3fcf90ff7858632f98254d32907d3a69c353bb01d3bebf41a898ace99b031323"} Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.962211 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" event={"ID":"063af3f5-26d8-41a8-8352-d199b63754b6","Type":"ContainerStarted","Data":"9bcc8df9e19f106d187c9809468525ea8b9f284cb1c34edf9706f19b2c0044fa"} Oct 08 13:30:50 crc kubenswrapper[5065]: I1008 13:30:50.982138 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" podStartSLOduration=2.982113254 podStartE2EDuration="2.982113254s" podCreationTimestamp="2025-10-08 13:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:30:50.979613841 +0000 UTC m=+752.756995608" watchObservedRunningTime="2025-10-08 13:30:50.982113254 +0000 UTC m=+752.759495011" Oct 08 13:30:51 
crc kubenswrapper[5065]: I1008 13:30:51.967121 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:51 crc kubenswrapper[5065]: I1008 13:30:51.971381 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66bf647c68-gpk5r" Oct 08 13:30:52 crc kubenswrapper[5065]: I1008 13:30:52.352193 5065 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 08 13:30:53 crc kubenswrapper[5065]: I1008 13:30:53.695205 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-nqrvj" Oct 08 13:30:54 crc kubenswrapper[5065]: I1008 13:30:54.375626 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:30:54 crc kubenswrapper[5065]: I1008 13:30:54.375697 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:30:54 crc kubenswrapper[5065]: I1008 13:30:54.375750 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:30:54 crc kubenswrapper[5065]: I1008 13:30:54.376346 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"03687d9c2628c1d5d874abdb932a1eb112aa1d5d672fca57fe617c3d9d4bd54c"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 13:30:54 crc kubenswrapper[5065]: I1008 13:30:54.376444 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://03687d9c2628c1d5d874abdb932a1eb112aa1d5d672fca57fe617c3d9d4bd54c" gracePeriod=600 Oct 08 13:30:54 crc kubenswrapper[5065]: I1008 13:30:54.984654 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="03687d9c2628c1d5d874abdb932a1eb112aa1d5d672fca57fe617c3d9d4bd54c" exitCode=0 Oct 08 13:30:54 crc kubenswrapper[5065]: I1008 13:30:54.984737 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"03687d9c2628c1d5d874abdb932a1eb112aa1d5d672fca57fe617c3d9d4bd54c"} Oct 08 13:30:54 crc kubenswrapper[5065]: I1008 13:30:54.985279 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"f1a1c08caf1f5c5ebf44b5caec0b83171c54c6a08c4b6c83a6707f77736bc763"} Oct 08 13:30:54 crc kubenswrapper[5065]: I1008 13:30:54.985301 
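The machine-config-daemon sequence above is a liveness-probe-driven restart: patch_prober and prober.go record the failed HTTP probe, kuberuntime_manager marks the container for restart, kuberuntime_container kills it with the pod's termination grace period (600 s), and PLEG then reports the ContainerDied/ContainerStarted pair. A sketch that tallies such probe failures per pod, using the "Probe failed" fields visible here; the helper name is illustrative:

```python
import re
from collections import Counter

# Matches prober.go:107 lines, e.g.
#   "Probe failed" probeType="Liveness" pod="openshift-.../machine-config-daemon-f2pbj" ...
PROBE_FAIL = re.compile(r'"Probe failed" probeType="(?P<type>\w+)" pod="(?P<pod>[^"]+)"')

def probe_failures(lines):
    """Tally 'Probe failed' events by (pod, probe type)."""
    tally = Counter()
    for line in lines:
        m = PROBE_FAIL.search(line)
        if m:
            tally[(m.group("pod"), m.group("type"))] += 1
    return tally
```

A pod that shows up here repeatedly is crash-looping on its probe; a single hit followed by a clean ContainerStarted, as in this log, is a one-off restart.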
5065 scope.go:117] "RemoveContainer" containerID="73ee35ce2597ab47414b6734db202d73211201b50d506090ee412556d4772970" Oct 08 13:31:05 crc kubenswrapper[5065]: I1008 13:31:05.689346 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6vfqj"] Oct 08 13:31:05 crc kubenswrapper[5065]: I1008 13:31:05.690900 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:05 crc kubenswrapper[5065]: I1008 13:31:05.703316 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vfqj"] Oct 08 13:31:05 crc kubenswrapper[5065]: I1008 13:31:05.877233 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w2v7\" (UniqueName: \"kubernetes.io/projected/43d7071b-212b-40d3-9be1-25fd1cc0f258-kube-api-access-2w2v7\") pod \"redhat-marketplace-6vfqj\" (UID: \"43d7071b-212b-40d3-9be1-25fd1cc0f258\") " pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:05 crc kubenswrapper[5065]: I1008 13:31:05.877299 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d7071b-212b-40d3-9be1-25fd1cc0f258-catalog-content\") pod \"redhat-marketplace-6vfqj\" (UID: \"43d7071b-212b-40d3-9be1-25fd1cc0f258\") " pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:05 crc kubenswrapper[5065]: I1008 13:31:05.877393 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d7071b-212b-40d3-9be1-25fd1cc0f258-utilities\") pod \"redhat-marketplace-6vfqj\" (UID: \"43d7071b-212b-40d3-9be1-25fd1cc0f258\") " pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:05 crc kubenswrapper[5065]: I1008 13:31:05.978708 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d7071b-212b-40d3-9be1-25fd1cc0f258-catalog-content\") pod \"redhat-marketplace-6vfqj\" (UID: \"43d7071b-212b-40d3-9be1-25fd1cc0f258\") " pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:05 crc kubenswrapper[5065]: I1008 13:31:05.978878 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d7071b-212b-40d3-9be1-25fd1cc0f258-utilities\") pod \"redhat-marketplace-6vfqj\" (UID: \"43d7071b-212b-40d3-9be1-25fd1cc0f258\") " pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:05 crc kubenswrapper[5065]: I1008 13:31:05.978989 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w2v7\" (UniqueName: \"kubernetes.io/projected/43d7071b-212b-40d3-9be1-25fd1cc0f258-kube-api-access-2w2v7\") pod \"redhat-marketplace-6vfqj\" (UID: \"43d7071b-212b-40d3-9be1-25fd1cc0f258\") " pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:05 crc kubenswrapper[5065]: I1008 13:31:05.979358 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d7071b-212b-40d3-9be1-25fd1cc0f258-catalog-content\") pod \"redhat-marketplace-6vfqj\" (UID: \"43d7071b-212b-40d3-9be1-25fd1cc0f258\") " pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:05 crc kubenswrapper[5065]: I1008 13:31:05.979905 5065 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d7071b-212b-40d3-9be1-25fd1cc0f258-utilities\") pod \"redhat-marketplace-6vfqj\" (UID: \"43d7071b-212b-40d3-9be1-25fd1cc0f258\") " pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:06 crc kubenswrapper[5065]: I1008 13:31:06.003963 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w2v7\" (UniqueName: \"kubernetes.io/projected/43d7071b-212b-40d3-9be1-25fd1cc0f258-kube-api-access-2w2v7\") pod \"redhat-marketplace-6vfqj\" (UID: \"43d7071b-212b-40d3-9be1-25fd1cc0f258\") " pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:06 crc kubenswrapper[5065]: I1008 13:31:06.018693 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:06 crc kubenswrapper[5065]: I1008 13:31:06.434982 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vfqj"] Oct 08 13:31:06 crc kubenswrapper[5065]: I1008 13:31:06.941870 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m"] Oct 08 13:31:06 crc kubenswrapper[5065]: I1008 13:31:06.943946 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" Oct 08 13:31:06 crc kubenswrapper[5065]: I1008 13:31:06.945692 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 08 13:31:06 crc kubenswrapper[5065]: I1008 13:31:06.952250 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m"] Oct 08 13:31:06 crc kubenswrapper[5065]: I1008 13:31:06.990336 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59c95\" (UniqueName: \"kubernetes.io/projected/ca6d2539-bba8-4625-a049-e3fa4403c861-kube-api-access-59c95\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m\" (UID: \"ca6d2539-bba8-4625-a049-e3fa4403c861\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" Oct 08 13:31:06 crc kubenswrapper[5065]: I1008 13:31:06.990735 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca6d2539-bba8-4625-a049-e3fa4403c861-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m\" (UID: \"ca6d2539-bba8-4625-a049-e3fa4403c861\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" Oct 08 13:31:06 crc kubenswrapper[5065]: I1008 13:31:06.990976 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca6d2539-bba8-4625-a049-e3fa4403c861-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m\" (UID: \"ca6d2539-bba8-4625-a049-e3fa4403c861\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" Oct 08 13:31:07 crc kubenswrapper[5065]: I1008 13:31:07.057055 5065 generic.go:334] "Generic (PLEG): container finished" podID="43d7071b-212b-40d3-9be1-25fd1cc0f258" 
containerID="943e9861c3f48823ecf03ef6bdf581fce3d605a8414231ef9460a2aa9806d6ed" exitCode=0 Oct 08 13:31:07 crc kubenswrapper[5065]: I1008 13:31:07.057122 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vfqj" event={"ID":"43d7071b-212b-40d3-9be1-25fd1cc0f258","Type":"ContainerDied","Data":"943e9861c3f48823ecf03ef6bdf581fce3d605a8414231ef9460a2aa9806d6ed"} Oct 08 13:31:07 crc kubenswrapper[5065]: I1008 13:31:07.057160 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vfqj" event={"ID":"43d7071b-212b-40d3-9be1-25fd1cc0f258","Type":"ContainerStarted","Data":"0a8cf2df7cbf7526583ea0dd89ad9b0fc7228d02cd59439a5486d88588b4deb3"} Oct 08 13:31:07 crc kubenswrapper[5065]: I1008 13:31:07.091976 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59c95\" (UniqueName: \"kubernetes.io/projected/ca6d2539-bba8-4625-a049-e3fa4403c861-kube-api-access-59c95\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m\" (UID: \"ca6d2539-bba8-4625-a049-e3fa4403c861\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" Oct 08 13:31:07 crc kubenswrapper[5065]: I1008 13:31:07.092347 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca6d2539-bba8-4625-a049-e3fa4403c861-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m\" (UID: \"ca6d2539-bba8-4625-a049-e3fa4403c861\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" Oct 08 13:31:07 crc kubenswrapper[5065]: I1008 13:31:07.092532 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca6d2539-bba8-4625-a049-e3fa4403c861-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m\" (UID: \"ca6d2539-bba8-4625-a049-e3fa4403c861\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" Oct 08 13:31:07 crc kubenswrapper[5065]: I1008 13:31:07.092881 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca6d2539-bba8-4625-a049-e3fa4403c861-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m\" (UID: \"ca6d2539-bba8-4625-a049-e3fa4403c861\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" Oct 08 13:31:07 crc kubenswrapper[5065]: I1008 13:31:07.092957 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca6d2539-bba8-4625-a049-e3fa4403c861-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m\" (UID: \"ca6d2539-bba8-4625-a049-e3fa4403c861\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" Oct 08 13:31:07 crc kubenswrapper[5065]: I1008 13:31:07.108251 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59c95\" (UniqueName: \"kubernetes.io/projected/ca6d2539-bba8-4625-a049-e3fa4403c861-kube-api-access-59c95\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m\" (UID: \"ca6d2539-bba8-4625-a049-e3fa4403c861\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" Oct 08 13:31:07 crc kubenswrapper[5065]: I1008 13:31:07.268158 
5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" Oct 08 13:31:07 crc kubenswrapper[5065]: I1008 13:31:07.655545 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m"] Oct 08 13:31:08 crc kubenswrapper[5065]: I1008 13:31:08.066142 5065 generic.go:334] "Generic (PLEG): container finished" podID="43d7071b-212b-40d3-9be1-25fd1cc0f258" containerID="b60578bc75ebff1be8e8e288cf0e8ecc1963243d5a4fe243887f64301c3114b7" exitCode=0 Oct 08 13:31:08 crc kubenswrapper[5065]: I1008 13:31:08.066236 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vfqj" event={"ID":"43d7071b-212b-40d3-9be1-25fd1cc0f258","Type":"ContainerDied","Data":"b60578bc75ebff1be8e8e288cf0e8ecc1963243d5a4fe243887f64301c3114b7"} Oct 08 13:31:08 crc kubenswrapper[5065]: I1008 13:31:08.067989 5065 generic.go:334] "Generic (PLEG): container finished" podID="ca6d2539-bba8-4625-a049-e3fa4403c861" containerID="c0c27b377a8af2e19fc1fedb27f4e4045748d89f44e3c4937d0f3495fff9e246" exitCode=0 Oct 08 13:31:08 crc kubenswrapper[5065]: I1008 13:31:08.068035 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" event={"ID":"ca6d2539-bba8-4625-a049-e3fa4403c861","Type":"ContainerDied","Data":"c0c27b377a8af2e19fc1fedb27f4e4045748d89f44e3c4937d0f3495fff9e246"} Oct 08 13:31:08 crc kubenswrapper[5065]: I1008 13:31:08.068070 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" event={"ID":"ca6d2539-bba8-4625-a049-e3fa4403c861","Type":"ContainerStarted","Data":"bcc653d8760f2d7b7e9d813a0c2a9e330c9c938566030fe125da4066ae037d5e"} Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.060330 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-w27qr" podUID="6877d346-fa92-428a-859c-218fdfe5ca4f" containerName="console" containerID="cri-o://9d0b92ab72c1f9c15403da27caec45fb61bd15a49435baf987c3eb3e882b3698" gracePeriod=15 Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.075342 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vfqj" event={"ID":"43d7071b-212b-40d3-9be1-25fd1cc0f258","Type":"ContainerStarted","Data":"a630ca78a0d6509cbf2825f1bb7fe37c37206bbb7f9eed1e3006a761cc1f500d"} Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.090652 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6vfqj" podStartSLOduration=2.6679997269999998 podStartE2EDuration="4.090632722s" podCreationTimestamp="2025-10-08 13:31:05 +0000 UTC" firstStartedPulling="2025-10-08 13:31:07.059727327 +0000 UTC m=+768.837109084" lastFinishedPulling="2025-10-08 13:31:08.482360322 +0000 UTC m=+770.259742079" observedRunningTime="2025-10-08 13:31:09.089589111 +0000 UTC m=+770.866970868" watchObservedRunningTime="2025-10-08 13:31:09.090632722 +0000 UTC m=+770.868014499" Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.538261 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-w27qr_6877d346-fa92-428a-859c-218fdfe5ca4f/console/0.log" Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.538332 5065 util.go:48] "No 
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.538332 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w27qr"
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.624402 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6877d346-fa92-428a-859c-218fdfe5ca4f-console-serving-cert\") pod \"6877d346-fa92-428a-859c-218fdfe5ca4f\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") "
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.624621 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-trusted-ca-bundle\") pod \"6877d346-fa92-428a-859c-218fdfe5ca4f\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") "
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.624652 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-console-config\") pod \"6877d346-fa92-428a-859c-218fdfe5ca4f\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") "
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.624678 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jtzw\" (UniqueName: \"kubernetes.io/projected/6877d346-fa92-428a-859c-218fdfe5ca4f-kube-api-access-5jtzw\") pod \"6877d346-fa92-428a-859c-218fdfe5ca4f\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") "
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.624724 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-service-ca\") pod \"6877d346-fa92-428a-859c-218fdfe5ca4f\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") "
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.624759 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6877d346-fa92-428a-859c-218fdfe5ca4f-console-oauth-config\") pod \"6877d346-fa92-428a-859c-218fdfe5ca4f\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") "
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.624822 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-oauth-serving-cert\") pod \"6877d346-fa92-428a-859c-218fdfe5ca4f\" (UID: \"6877d346-fa92-428a-859c-218fdfe5ca4f\") "
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.625316 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-service-ca" (OuterVolumeSpecName: "service-ca") pod "6877d346-fa92-428a-859c-218fdfe5ca4f" (UID: "6877d346-fa92-428a-859c-218fdfe5ca4f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.625311 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-console-config" (OuterVolumeSpecName: "console-config") pod "6877d346-fa92-428a-859c-218fdfe5ca4f" (UID: "6877d346-fa92-428a-859c-218fdfe5ca4f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.625677 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6877d346-fa92-428a-859c-218fdfe5ca4f" (UID: "6877d346-fa92-428a-859c-218fdfe5ca4f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.625873 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6877d346-fa92-428a-859c-218fdfe5ca4f" (UID: "6877d346-fa92-428a-859c-218fdfe5ca4f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.626068 5065 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.626089 5065 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.626097 5065 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-console-config\") on node \"crc\" DevicePath \"\""
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.626105 5065 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6877d346-fa92-428a-859c-218fdfe5ca4f-service-ca\") on node \"crc\" DevicePath \"\""
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.630140 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6877d346-fa92-428a-859c-218fdfe5ca4f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6877d346-fa92-428a-859c-218fdfe5ca4f" (UID: "6877d346-fa92-428a-859c-218fdfe5ca4f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.630210 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6877d346-fa92-428a-859c-218fdfe5ca4f-kube-api-access-5jtzw" (OuterVolumeSpecName: "kube-api-access-5jtzw") pod "6877d346-fa92-428a-859c-218fdfe5ca4f" (UID: "6877d346-fa92-428a-859c-218fdfe5ca4f"). InnerVolumeSpecName "kube-api-access-5jtzw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.630717 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6877d346-fa92-428a-859c-218fdfe5ca4f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6877d346-fa92-428a-859c-218fdfe5ca4f" (UID: "6877d346-fa92-428a-859c-218fdfe5ca4f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.727757 5065 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6877d346-fa92-428a-859c-218fdfe5ca4f-console-oauth-config\") on node \"crc\" DevicePath \"\""
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.727793 5065 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6877d346-fa92-428a-859c-218fdfe5ca4f-console-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 08 13:31:09 crc kubenswrapper[5065]: I1008 13:31:09.727803 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jtzw\" (UniqueName: \"kubernetes.io/projected/6877d346-fa92-428a-859c-218fdfe5ca4f-kube-api-access-5jtzw\") on node \"crc\" DevicePath \"\""
Oct 08 13:31:10 crc kubenswrapper[5065]: I1008 13:31:10.082365 5065 generic.go:334] "Generic (PLEG): container finished" podID="ca6d2539-bba8-4625-a049-e3fa4403c861" containerID="6d2aca428b9c9cbe9dee9be54730bc56c6dab28f9e64ac27301dfb3de9f81a75" exitCode=0
Oct 08 13:31:10 crc kubenswrapper[5065]: I1008 13:31:10.082484 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" event={"ID":"ca6d2539-bba8-4625-a049-e3fa4403c861","Type":"ContainerDied","Data":"6d2aca428b9c9cbe9dee9be54730bc56c6dab28f9e64ac27301dfb3de9f81a75"}
Oct 08 13:31:10 crc kubenswrapper[5065]: I1008 13:31:10.085614 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-w27qr_6877d346-fa92-428a-859c-218fdfe5ca4f/console/0.log"
Oct 08 13:31:10 crc kubenswrapper[5065]: I1008 13:31:10.085674 5065 generic.go:334] "Generic (PLEG): container finished" podID="6877d346-fa92-428a-859c-218fdfe5ca4f" containerID="9d0b92ab72c1f9c15403da27caec45fb61bd15a49435baf987c3eb3e882b3698" exitCode=2
Oct 08 13:31:10 crc kubenswrapper[5065]: I1008 13:31:10.085716 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w27qr" event={"ID":"6877d346-fa92-428a-859c-218fdfe5ca4f","Type":"ContainerDied","Data":"9d0b92ab72c1f9c15403da27caec45fb61bd15a49435baf987c3eb3e882b3698"}
Oct 08 13:31:10 crc kubenswrapper[5065]: I1008 13:31:10.085752 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w27qr" event={"ID":"6877d346-fa92-428a-859c-218fdfe5ca4f","Type":"ContainerDied","Data":"c0123c109f7806ddf380e5e31cf7fd666e0d5bb11eea5fedfeadc7ec1db7b85c"}
Oct 08 13:31:10 crc kubenswrapper[5065]: I1008 13:31:10.085798 5065 scope.go:117] "RemoveContainer" containerID="9d0b92ab72c1f9c15403da27caec45fb61bd15a49435baf987c3eb3e882b3698"
Oct 08 13:31:10 crc kubenswrapper[5065]: I1008 13:31:10.085847 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w27qr"
Oct 08 13:31:10 crc kubenswrapper[5065]: I1008 13:31:10.108316 5065 scope.go:117] "RemoveContainer" containerID="9d0b92ab72c1f9c15403da27caec45fb61bd15a49435baf987c3eb3e882b3698"
Oct 08 13:31:10 crc kubenswrapper[5065]: E1008 13:31:10.108815 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d0b92ab72c1f9c15403da27caec45fb61bd15a49435baf987c3eb3e882b3698\": container with ID starting with 9d0b92ab72c1f9c15403da27caec45fb61bd15a49435baf987c3eb3e882b3698 not found: ID does not exist" containerID="9d0b92ab72c1f9c15403da27caec45fb61bd15a49435baf987c3eb3e882b3698"
Oct 08 13:31:10 crc kubenswrapper[5065]: I1008 13:31:10.108857 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0b92ab72c1f9c15403da27caec45fb61bd15a49435baf987c3eb3e882b3698"} err="failed to get container status \"9d0b92ab72c1f9c15403da27caec45fb61bd15a49435baf987c3eb3e882b3698\": rpc error: code = NotFound desc = could not find container \"9d0b92ab72c1f9c15403da27caec45fb61bd15a49435baf987c3eb3e882b3698\": container with ID starting with 9d0b92ab72c1f9c15403da27caec45fb61bd15a49435baf987c3eb3e882b3698 not found: ID does not exist"
Oct 08 13:31:10 crc kubenswrapper[5065]: I1008 13:31:10.129187 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-w27qr"]
Oct 08 13:31:10 crc kubenswrapper[5065]: I1008 13:31:10.132143 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-w27qr"]
Oct 08 13:31:10 crc kubenswrapper[5065]: I1008 13:31:10.888096 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6877d346-fa92-428a-859c-218fdfe5ca4f" path="/var/lib/kubelet/pods/6877d346-fa92-428a-859c-218fdfe5ca4f/volumes"
Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.095372 5065 generic.go:334] "Generic (PLEG): container finished" podID="ca6d2539-bba8-4625-a049-e3fa4403c861" containerID="7b1ff429af4eab8fe9728551efc75547d4e2a4d6c36dd49f2bd9036e4a5f8875" exitCode=0
Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.095457 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" event={"ID":"ca6d2539-bba8-4625-a049-e3fa4403c861","Type":"ContainerDied","Data":"7b1ff429af4eab8fe9728551efc75547d4e2a4d6c36dd49f2bd9036e4a5f8875"}
Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.696181 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-56mb7"]
Oct 08 13:31:11 crc kubenswrapper[5065]: E1008 13:31:11.696500 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6877d346-fa92-428a-859c-218fdfe5ca4f" containerName="console"
Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.696521 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="6877d346-fa92-428a-859c-218fdfe5ca4f" containerName="console"
Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.696639 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="6877d346-fa92-428a-859c-218fdfe5ca4f" containerName="console"
Need to start a new one" pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.711394 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56mb7"] Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.755048 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-catalog-content\") pod \"redhat-operators-56mb7\" (UID: \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\") " pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.755168 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bjfq\" (UniqueName: \"kubernetes.io/projected/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-kube-api-access-4bjfq\") pod \"redhat-operators-56mb7\" (UID: \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\") " pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.755206 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-utilities\") pod \"redhat-operators-56mb7\" (UID: \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\") " pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.856120 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bjfq\" (UniqueName: \"kubernetes.io/projected/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-kube-api-access-4bjfq\") pod \"redhat-operators-56mb7\" (UID: \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\") " pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.856183 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-utilities\") pod \"redhat-operators-56mb7\" (UID: \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\") " pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.856287 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-catalog-content\") pod \"redhat-operators-56mb7\" (UID: \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\") " pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.856867 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-catalog-content\") pod \"redhat-operators-56mb7\" (UID: \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\") " pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.856882 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-utilities\") pod \"redhat-operators-56mb7\" (UID: \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\") " pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:11 crc kubenswrapper[5065]: I1008 13:31:11.874793 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4bjfq\" (UniqueName: \"kubernetes.io/projected/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-kube-api-access-4bjfq\") pod \"redhat-operators-56mb7\" (UID: \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\") " pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:12 crc kubenswrapper[5065]: I1008 13:31:12.018178 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:12 crc kubenswrapper[5065]: I1008 13:31:12.414710 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56mb7"] Oct 08 13:31:12 crc kubenswrapper[5065]: I1008 13:31:12.485240 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" Oct 08 13:31:12 crc kubenswrapper[5065]: I1008 13:31:12.665405 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59c95\" (UniqueName: \"kubernetes.io/projected/ca6d2539-bba8-4625-a049-e3fa4403c861-kube-api-access-59c95\") pod \"ca6d2539-bba8-4625-a049-e3fa4403c861\" (UID: \"ca6d2539-bba8-4625-a049-e3fa4403c861\") " Oct 08 13:31:12 crc kubenswrapper[5065]: I1008 13:31:12.665515 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca6d2539-bba8-4625-a049-e3fa4403c861-util\") pod \"ca6d2539-bba8-4625-a049-e3fa4403c861\" (UID: \"ca6d2539-bba8-4625-a049-e3fa4403c861\") " Oct 08 13:31:12 crc kubenswrapper[5065]: I1008 13:31:12.665540 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca6d2539-bba8-4625-a049-e3fa4403c861-bundle\") pod \"ca6d2539-bba8-4625-a049-e3fa4403c861\" (UID: \"ca6d2539-bba8-4625-a049-e3fa4403c861\") " Oct 08 13:31:12 crc kubenswrapper[5065]: I1008 13:31:12.666581 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca6d2539-bba8-4625-a049-e3fa4403c861-bundle" (OuterVolumeSpecName: "bundle") pod "ca6d2539-bba8-4625-a049-e3fa4403c861" (UID: "ca6d2539-bba8-4625-a049-e3fa4403c861"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:31:12 crc kubenswrapper[5065]: I1008 13:31:12.670559 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca6d2539-bba8-4625-a049-e3fa4403c861-kube-api-access-59c95" (OuterVolumeSpecName: "kube-api-access-59c95") pod "ca6d2539-bba8-4625-a049-e3fa4403c861" (UID: "ca6d2539-bba8-4625-a049-e3fa4403c861"). InnerVolumeSpecName "kube-api-access-59c95". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:31:12 crc kubenswrapper[5065]: I1008 13:31:12.681857 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca6d2539-bba8-4625-a049-e3fa4403c861-util" (OuterVolumeSpecName: "util") pod "ca6d2539-bba8-4625-a049-e3fa4403c861" (UID: "ca6d2539-bba8-4625-a049-e3fa4403c861"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:31:12 crc kubenswrapper[5065]: I1008 13:31:12.766512 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59c95\" (UniqueName: \"kubernetes.io/projected/ca6d2539-bba8-4625-a049-e3fa4403c861-kube-api-access-59c95\") on node \"crc\" DevicePath \"\"" Oct 08 13:31:12 crc kubenswrapper[5065]: I1008 13:31:12.766543 5065 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca6d2539-bba8-4625-a049-e3fa4403c861-util\") on node \"crc\" DevicePath \"\"" Oct 08 13:31:12 crc kubenswrapper[5065]: I1008 13:31:12.766553 5065 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca6d2539-bba8-4625-a049-e3fa4403c861-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:31:13 crc kubenswrapper[5065]: I1008 13:31:13.112120 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" event={"ID":"ca6d2539-bba8-4625-a049-e3fa4403c861","Type":"ContainerDied","Data":"bcc653d8760f2d7b7e9d813a0c2a9e330c9c938566030fe125da4066ae037d5e"} Oct 08 13:31:13 crc kubenswrapper[5065]: I1008 13:31:13.112571 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcc653d8760f2d7b7e9d813a0c2a9e330c9c938566030fe125da4066ae037d5e" Oct 08 13:31:13 crc kubenswrapper[5065]: I1008 13:31:13.112695 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m" Oct 08 13:31:13 crc kubenswrapper[5065]: I1008 13:31:13.116651 5065 generic.go:334] "Generic (PLEG): container finished" podID="4b0272d9-790c-40bb-9fe6-0730c7ace1c1" containerID="a76b7a3d58a6d7bead3154ade6697ed9e19eaf99083133fe066f4085d7e11dd1" exitCode=0 Oct 08 13:31:13 crc kubenswrapper[5065]: I1008 13:31:13.116707 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mb7" event={"ID":"4b0272d9-790c-40bb-9fe6-0730c7ace1c1","Type":"ContainerDied","Data":"a76b7a3d58a6d7bead3154ade6697ed9e19eaf99083133fe066f4085d7e11dd1"} Oct 08 13:31:13 crc kubenswrapper[5065]: I1008 13:31:13.116736 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mb7" event={"ID":"4b0272d9-790c-40bb-9fe6-0730c7ace1c1","Type":"ContainerStarted","Data":"3f52822280f325359d10fbf66ebeedcfe9ea15b1c6370e572990726a8521ff9e"} Oct 08 13:31:15 crc kubenswrapper[5065]: I1008 13:31:15.130342 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mb7" event={"ID":"4b0272d9-790c-40bb-9fe6-0730c7ace1c1","Type":"ContainerStarted","Data":"e7d42b8fe1b2a1b916a4ec28281c19d345037aa9de0d3408744d55441e017cfb"} Oct 08 13:31:16 crc kubenswrapper[5065]: I1008 13:31:16.019674 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:16 crc kubenswrapper[5065]: I1008 13:31:16.019758 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:16 crc kubenswrapper[5065]: I1008 13:31:16.075552 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:16 crc kubenswrapper[5065]: I1008 13:31:16.136958 5065 generic.go:334] "Generic (PLEG): container 
finished" podID="4b0272d9-790c-40bb-9fe6-0730c7ace1c1" containerID="e7d42b8fe1b2a1b916a4ec28281c19d345037aa9de0d3408744d55441e017cfb" exitCode=0 Oct 08 13:31:16 crc kubenswrapper[5065]: I1008 13:31:16.137077 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mb7" event={"ID":"4b0272d9-790c-40bb-9fe6-0730c7ace1c1","Type":"ContainerDied","Data":"e7d42b8fe1b2a1b916a4ec28281c19d345037aa9de0d3408744d55441e017cfb"} Oct 08 13:31:16 crc kubenswrapper[5065]: I1008 13:31:16.179322 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:17 crc kubenswrapper[5065]: I1008 13:31:17.144594 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mb7" event={"ID":"4b0272d9-790c-40bb-9fe6-0730c7ace1c1","Type":"ContainerStarted","Data":"2a94fe75830b28cfef39829faf39f2d9d6547be7ee9c81c5d48a46a9175c0260"} Oct 08 13:31:17 crc kubenswrapper[5065]: I1008 13:31:17.162572 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-56mb7" podStartSLOduration=2.739292528 podStartE2EDuration="6.162558516s" podCreationTimestamp="2025-10-08 13:31:11 +0000 UTC" firstStartedPulling="2025-10-08 13:31:13.12045341 +0000 UTC m=+774.897835187" lastFinishedPulling="2025-10-08 13:31:16.543719428 +0000 UTC m=+778.321101175" observedRunningTime="2025-10-08 13:31:17.160702882 +0000 UTC m=+778.938084639" watchObservedRunningTime="2025-10-08 13:31:17.162558516 +0000 UTC m=+778.939940273" Oct 08 13:31:19 crc kubenswrapper[5065]: I1008 13:31:19.695490 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vfqj"] Oct 08 13:31:19 crc kubenswrapper[5065]: I1008 13:31:19.695962 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6vfqj" podUID="43d7071b-212b-40d3-9be1-25fd1cc0f258" containerName="registry-server" containerID="cri-o://a630ca78a0d6509cbf2825f1bb7fe37c37206bbb7f9eed1e3006a761cc1f500d" gracePeriod=2 Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.150342 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.159613 5065 generic.go:334] "Generic (PLEG): container finished" podID="43d7071b-212b-40d3-9be1-25fd1cc0f258" containerID="a630ca78a0d6509cbf2825f1bb7fe37c37206bbb7f9eed1e3006a761cc1f500d" exitCode=0 Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.159660 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vfqj" event={"ID":"43d7071b-212b-40d3-9be1-25fd1cc0f258","Type":"ContainerDied","Data":"a630ca78a0d6509cbf2825f1bb7fe37c37206bbb7f9eed1e3006a761cc1f500d"} Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.159673 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vfqj" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.159689 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vfqj" event={"ID":"43d7071b-212b-40d3-9be1-25fd1cc0f258","Type":"ContainerDied","Data":"0a8cf2df7cbf7526583ea0dd89ad9b0fc7228d02cd59439a5486d88588b4deb3"} Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.159712 5065 scope.go:117] "RemoveContainer" containerID="a630ca78a0d6509cbf2825f1bb7fe37c37206bbb7f9eed1e3006a761cc1f500d" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.179921 5065 scope.go:117] "RemoveContainer" containerID="b60578bc75ebff1be8e8e288cf0e8ecc1963243d5a4fe243887f64301c3114b7" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.209508 5065 scope.go:117] "RemoveContainer" containerID="943e9861c3f48823ecf03ef6bdf581fce3d605a8414231ef9460a2aa9806d6ed" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.241843 5065 scope.go:117] "RemoveContainer" containerID="a630ca78a0d6509cbf2825f1bb7fe37c37206bbb7f9eed1e3006a761cc1f500d" Oct 08 13:31:20 crc kubenswrapper[5065]: E1008 13:31:20.242267 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a630ca78a0d6509cbf2825f1bb7fe37c37206bbb7f9eed1e3006a761cc1f500d\": container with ID starting with a630ca78a0d6509cbf2825f1bb7fe37c37206bbb7f9eed1e3006a761cc1f500d not found: ID does not exist" containerID="a630ca78a0d6509cbf2825f1bb7fe37c37206bbb7f9eed1e3006a761cc1f500d" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.242304 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a630ca78a0d6509cbf2825f1bb7fe37c37206bbb7f9eed1e3006a761cc1f500d"} err="failed to get container status \"a630ca78a0d6509cbf2825f1bb7fe37c37206bbb7f9eed1e3006a761cc1f500d\": rpc error: code = NotFound desc = could not find container \"a630ca78a0d6509cbf2825f1bb7fe37c37206bbb7f9eed1e3006a761cc1f500d\": container with ID starting with a630ca78a0d6509cbf2825f1bb7fe37c37206bbb7f9eed1e3006a761cc1f500d not found: ID does not exist" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.242332 5065 scope.go:117] "RemoveContainer" containerID="b60578bc75ebff1be8e8e288cf0e8ecc1963243d5a4fe243887f64301c3114b7" Oct 08 13:31:20 crc kubenswrapper[5065]: E1008 13:31:20.242573 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b60578bc75ebff1be8e8e288cf0e8ecc1963243d5a4fe243887f64301c3114b7\": container with ID starting with b60578bc75ebff1be8e8e288cf0e8ecc1963243d5a4fe243887f64301c3114b7 not found: ID does not exist" containerID="b60578bc75ebff1be8e8e288cf0e8ecc1963243d5a4fe243887f64301c3114b7" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.242602 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b60578bc75ebff1be8e8e288cf0e8ecc1963243d5a4fe243887f64301c3114b7"} err="failed to get container status \"b60578bc75ebff1be8e8e288cf0e8ecc1963243d5a4fe243887f64301c3114b7\": rpc error: code = NotFound desc = could not find container \"b60578bc75ebff1be8e8e288cf0e8ecc1963243d5a4fe243887f64301c3114b7\": container with ID starting with b60578bc75ebff1be8e8e288cf0e8ecc1963243d5a4fe243887f64301c3114b7 not found: ID does not exist" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.242616 5065 scope.go:117] "RemoveContainer" 
containerID="943e9861c3f48823ecf03ef6bdf581fce3d605a8414231ef9460a2aa9806d6ed" Oct 08 13:31:20 crc kubenswrapper[5065]: E1008 13:31:20.242833 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"943e9861c3f48823ecf03ef6bdf581fce3d605a8414231ef9460a2aa9806d6ed\": container with ID starting with 943e9861c3f48823ecf03ef6bdf581fce3d605a8414231ef9460a2aa9806d6ed not found: ID does not exist" containerID="943e9861c3f48823ecf03ef6bdf581fce3d605a8414231ef9460a2aa9806d6ed" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.242859 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"943e9861c3f48823ecf03ef6bdf581fce3d605a8414231ef9460a2aa9806d6ed"} err="failed to get container status \"943e9861c3f48823ecf03ef6bdf581fce3d605a8414231ef9460a2aa9806d6ed\": rpc error: code = NotFound desc = could not find container \"943e9861c3f48823ecf03ef6bdf581fce3d605a8414231ef9460a2aa9806d6ed\": container with ID starting with 943e9861c3f48823ecf03ef6bdf581fce3d605a8414231ef9460a2aa9806d6ed not found: ID does not exist" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.260352 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w2v7\" (UniqueName: \"kubernetes.io/projected/43d7071b-212b-40d3-9be1-25fd1cc0f258-kube-api-access-2w2v7\") pod \"43d7071b-212b-40d3-9be1-25fd1cc0f258\" (UID: \"43d7071b-212b-40d3-9be1-25fd1cc0f258\") " Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.260403 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d7071b-212b-40d3-9be1-25fd1cc0f258-catalog-content\") pod \"43d7071b-212b-40d3-9be1-25fd1cc0f258\" (UID: \"43d7071b-212b-40d3-9be1-25fd1cc0f258\") " Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.260463 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d7071b-212b-40d3-9be1-25fd1cc0f258-utilities\") pod \"43d7071b-212b-40d3-9be1-25fd1cc0f258\" (UID: \"43d7071b-212b-40d3-9be1-25fd1cc0f258\") " Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.261344 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43d7071b-212b-40d3-9be1-25fd1cc0f258-utilities" (OuterVolumeSpecName: "utilities") pod "43d7071b-212b-40d3-9be1-25fd1cc0f258" (UID: "43d7071b-212b-40d3-9be1-25fd1cc0f258"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.271079 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d7071b-212b-40d3-9be1-25fd1cc0f258-kube-api-access-2w2v7" (OuterVolumeSpecName: "kube-api-access-2w2v7") pod "43d7071b-212b-40d3-9be1-25fd1cc0f258" (UID: "43d7071b-212b-40d3-9be1-25fd1cc0f258"). InnerVolumeSpecName "kube-api-access-2w2v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.274710 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43d7071b-212b-40d3-9be1-25fd1cc0f258-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43d7071b-212b-40d3-9be1-25fd1cc0f258" (UID: "43d7071b-212b-40d3-9be1-25fd1cc0f258"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.361853 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w2v7\" (UniqueName: \"kubernetes.io/projected/43d7071b-212b-40d3-9be1-25fd1cc0f258-kube-api-access-2w2v7\") on node \"crc\" DevicePath \"\"" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.361889 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d7071b-212b-40d3-9be1-25fd1cc0f258-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.361901 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d7071b-212b-40d3-9be1-25fd1cc0f258-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.484301 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vfqj"] Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.486972 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vfqj"] Oct 08 13:31:20 crc kubenswrapper[5065]: I1008 13:31:20.880544 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43d7071b-212b-40d3-9be1-25fd1cc0f258" path="/var/lib/kubelet/pods/43d7071b-212b-40d3-9be1-25fd1cc0f258/volumes" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.018682 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.018983 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.072605 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.269992 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.452520 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf"] Oct 08 13:31:22 crc kubenswrapper[5065]: E1008 13:31:22.452752 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d7071b-212b-40d3-9be1-25fd1cc0f258" containerName="registry-server" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.452768 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d7071b-212b-40d3-9be1-25fd1cc0f258" containerName="registry-server" Oct 08 13:31:22 crc kubenswrapper[5065]: E1008 13:31:22.452782 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6d2539-bba8-4625-a049-e3fa4403c861" containerName="extract" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.452788 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6d2539-bba8-4625-a049-e3fa4403c861" containerName="extract" Oct 08 13:31:22 crc kubenswrapper[5065]: E1008 13:31:22.452797 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d7071b-212b-40d3-9be1-25fd1cc0f258" containerName="extract-content" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.452804 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d7071b-212b-40d3-9be1-25fd1cc0f258" 
containerName="extract-content" Oct 08 13:31:22 crc kubenswrapper[5065]: E1008 13:31:22.452815 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6d2539-bba8-4625-a049-e3fa4403c861" containerName="util" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.452821 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6d2539-bba8-4625-a049-e3fa4403c861" containerName="util" Oct 08 13:31:22 crc kubenswrapper[5065]: E1008 13:31:22.452829 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6d2539-bba8-4625-a049-e3fa4403c861" containerName="pull" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.452836 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6d2539-bba8-4625-a049-e3fa4403c861" containerName="pull" Oct 08 13:31:22 crc kubenswrapper[5065]: E1008 13:31:22.452844 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d7071b-212b-40d3-9be1-25fd1cc0f258" containerName="extract-utilities" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.452851 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d7071b-212b-40d3-9be1-25fd1cc0f258" containerName="extract-utilities" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.452944 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d7071b-212b-40d3-9be1-25fd1cc0f258" containerName="registry-server" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.452955 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca6d2539-bba8-4625-a049-e3fa4403c861" containerName="extract" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.453297 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.455277 5065 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.455639 5065 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-mgl2b" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.455749 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.456804 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.460359 5065 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.465933 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf"] Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.600538 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm765\" (UniqueName: \"kubernetes.io/projected/f4a0f022-1142-4830-aaff-7458bd6c1c94-kube-api-access-wm765\") pod \"metallb-operator-controller-manager-5fbf659dfd-bttmf\" (UID: \"f4a0f022-1142-4830-aaff-7458bd6c1c94\") " pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.600704 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f4a0f022-1142-4830-aaff-7458bd6c1c94-apiservice-cert\") pod \"metallb-operator-controller-manager-5fbf659dfd-bttmf\" (UID: \"f4a0f022-1142-4830-aaff-7458bd6c1c94\") " pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.600805 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f4a0f022-1142-4830-aaff-7458bd6c1c94-webhook-cert\") pod \"metallb-operator-controller-manager-5fbf659dfd-bttmf\" (UID: \"f4a0f022-1142-4830-aaff-7458bd6c1c94\") " pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.689710 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-f4447b86b-749tv"] Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.690307 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.692937 5065 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.693000 5065 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.693159 5065 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wcd8m" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.701467 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f4a0f022-1142-4830-aaff-7458bd6c1c94-apiservice-cert\") pod \"metallb-operator-controller-manager-5fbf659dfd-bttmf\" (UID: \"f4a0f022-1142-4830-aaff-7458bd6c1c94\") " pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.701541 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f4a0f022-1142-4830-aaff-7458bd6c1c94-webhook-cert\") pod \"metallb-operator-controller-manager-5fbf659dfd-bttmf\" (UID: \"f4a0f022-1142-4830-aaff-7458bd6c1c94\") " pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.701585 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm765\" (UniqueName: \"kubernetes.io/projected/f4a0f022-1142-4830-aaff-7458bd6c1c94-kube-api-access-wm765\") pod \"metallb-operator-controller-manager-5fbf659dfd-bttmf\" (UID: \"f4a0f022-1142-4830-aaff-7458bd6c1c94\") " pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.708496 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f4a0f022-1142-4830-aaff-7458bd6c1c94-webhook-cert\") pod \"metallb-operator-controller-manager-5fbf659dfd-bttmf\" (UID: \"f4a0f022-1142-4830-aaff-7458bd6c1c94\") " pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.712035 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f4a0f022-1142-4830-aaff-7458bd6c1c94-apiservice-cert\") pod \"metallb-operator-controller-manager-5fbf659dfd-bttmf\" (UID: \"f4a0f022-1142-4830-aaff-7458bd6c1c94\") " pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.712105 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-f4447b86b-749tv"] Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.751071 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm765\" (UniqueName: \"kubernetes.io/projected/f4a0f022-1142-4830-aaff-7458bd6c1c94-kube-api-access-wm765\") pod \"metallb-operator-controller-manager-5fbf659dfd-bttmf\" (UID: \"f4a0f022-1142-4830-aaff-7458bd6c1c94\") " pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.769653 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.802808 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzhps\" (UniqueName: \"kubernetes.io/projected/aa74013a-ec8e-45ca-9150-99e2a326e28f-kube-api-access-jzhps\") pod \"metallb-operator-webhook-server-f4447b86b-749tv\" (UID: \"aa74013a-ec8e-45ca-9150-99e2a326e28f\") " pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.802852 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aa74013a-ec8e-45ca-9150-99e2a326e28f-apiservice-cert\") pod \"metallb-operator-webhook-server-f4447b86b-749tv\" (UID: \"aa74013a-ec8e-45ca-9150-99e2a326e28f\") " pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.802892 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aa74013a-ec8e-45ca-9150-99e2a326e28f-webhook-cert\") pod \"metallb-operator-webhook-server-f4447b86b-749tv\" (UID: \"aa74013a-ec8e-45ca-9150-99e2a326e28f\") " pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.903912 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzhps\" (UniqueName: \"kubernetes.io/projected/aa74013a-ec8e-45ca-9150-99e2a326e28f-kube-api-access-jzhps\") pod \"metallb-operator-webhook-server-f4447b86b-749tv\" (UID: \"aa74013a-ec8e-45ca-9150-99e2a326e28f\") " pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.904234 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aa74013a-ec8e-45ca-9150-99e2a326e28f-apiservice-cert\") pod \"metallb-operator-webhook-server-f4447b86b-749tv\" (UID: \"aa74013a-ec8e-45ca-9150-99e2a326e28f\") " pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.904269 5065 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aa74013a-ec8e-45ca-9150-99e2a326e28f-webhook-cert\") pod \"metallb-operator-webhook-server-f4447b86b-749tv\" (UID: \"aa74013a-ec8e-45ca-9150-99e2a326e28f\") " pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.908013 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aa74013a-ec8e-45ca-9150-99e2a326e28f-apiservice-cert\") pod \"metallb-operator-webhook-server-f4447b86b-749tv\" (UID: \"aa74013a-ec8e-45ca-9150-99e2a326e28f\") " pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.909046 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aa74013a-ec8e-45ca-9150-99e2a326e28f-webhook-cert\") pod \"metallb-operator-webhook-server-f4447b86b-749tv\" (UID: \"aa74013a-ec8e-45ca-9150-99e2a326e28f\") " pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" Oct 08 13:31:22 crc kubenswrapper[5065]: I1008 13:31:22.925511 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzhps\" (UniqueName: \"kubernetes.io/projected/aa74013a-ec8e-45ca-9150-99e2a326e28f-kube-api-access-jzhps\") pod \"metallb-operator-webhook-server-f4447b86b-749tv\" (UID: \"aa74013a-ec8e-45ca-9150-99e2a326e28f\") " pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" Oct 08 13:31:23 crc kubenswrapper[5065]: I1008 13:31:23.005122 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" Oct 08 13:31:23 crc kubenswrapper[5065]: I1008 13:31:23.198162 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf"] Oct 08 13:31:23 crc kubenswrapper[5065]: I1008 13:31:23.488600 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-f4447b86b-749tv"] Oct 08 13:31:23 crc kubenswrapper[5065]: W1008 13:31:23.492169 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa74013a_ec8e_45ca_9150_99e2a326e28f.slice/crio-8477492a7b79f10b22e410ef467d37e951475068ae68c1154689d14deaf3a422 WatchSource:0}: Error finding container 8477492a7b79f10b22e410ef467d37e951475068ae68c1154689d14deaf3a422: Status 404 returned error can't find the container with id 8477492a7b79f10b22e410ef467d37e951475068ae68c1154689d14deaf3a422 Oct 08 13:31:24 crc kubenswrapper[5065]: I1008 13:31:24.181603 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" event={"ID":"f4a0f022-1142-4830-aaff-7458bd6c1c94","Type":"ContainerStarted","Data":"4f048534c56da1b018f05b980ef2d5de62b55be94068a05ca52273b7958b46a6"} Oct 08 13:31:24 crc kubenswrapper[5065]: I1008 13:31:24.182804 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" event={"ID":"aa74013a-ec8e-45ca-9150-99e2a326e28f","Type":"ContainerStarted","Data":"8477492a7b79f10b22e410ef467d37e951475068ae68c1154689d14deaf3a422"} Oct 08 13:31:25 crc kubenswrapper[5065]: I1008 13:31:25.681280 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56mb7"] 
Oct 08 13:31:25 crc kubenswrapper[5065]: I1008 13:31:25.682896 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-56mb7" podUID="4b0272d9-790c-40bb-9fe6-0730c7ace1c1" containerName="registry-server" containerID="cri-o://2a94fe75830b28cfef39829faf39f2d9d6547be7ee9c81c5d48a46a9175c0260" gracePeriod=2 Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.072510 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.148601 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-catalog-content\") pod \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\" (UID: \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\") " Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.148655 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bjfq\" (UniqueName: \"kubernetes.io/projected/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-kube-api-access-4bjfq\") pod \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\" (UID: \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\") " Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.148694 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-utilities\") pod \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\" (UID: \"4b0272d9-790c-40bb-9fe6-0730c7ace1c1\") " Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.149632 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-utilities" (OuterVolumeSpecName: "utilities") pod "4b0272d9-790c-40bb-9fe6-0730c7ace1c1" (UID: "4b0272d9-790c-40bb-9fe6-0730c7ace1c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.154632 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-kube-api-access-4bjfq" (OuterVolumeSpecName: "kube-api-access-4bjfq") pod "4b0272d9-790c-40bb-9fe6-0730c7ace1c1" (UID: "4b0272d9-790c-40bb-9fe6-0730c7ace1c1"). InnerVolumeSpecName "kube-api-access-4bjfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.194586 5065 generic.go:334] "Generic (PLEG): container finished" podID="4b0272d9-790c-40bb-9fe6-0730c7ace1c1" containerID="2a94fe75830b28cfef39829faf39f2d9d6547be7ee9c81c5d48a46a9175c0260" exitCode=0 Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.194638 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56mb7" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.194636 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mb7" event={"ID":"4b0272d9-790c-40bb-9fe6-0730c7ace1c1","Type":"ContainerDied","Data":"2a94fe75830b28cfef39829faf39f2d9d6547be7ee9c81c5d48a46a9175c0260"} Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.194721 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56mb7" event={"ID":"4b0272d9-790c-40bb-9fe6-0730c7ace1c1","Type":"ContainerDied","Data":"3f52822280f325359d10fbf66ebeedcfe9ea15b1c6370e572990726a8521ff9e"} Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.194771 5065 scope.go:117] "RemoveContainer" containerID="2a94fe75830b28cfef39829faf39f2d9d6547be7ee9c81c5d48a46a9175c0260" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.217670 5065 scope.go:117] "RemoveContainer" containerID="e7d42b8fe1b2a1b916a4ec28281c19d345037aa9de0d3408744d55441e017cfb" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.233168 5065 scope.go:117] "RemoveContainer" containerID="a76b7a3d58a6d7bead3154ade6697ed9e19eaf99083133fe066f4085d7e11dd1" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.250040 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bjfq\" (UniqueName: \"kubernetes.io/projected/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-kube-api-access-4bjfq\") on node \"crc\" DevicePath \"\"" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.250076 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.251571 5065 scope.go:117] "RemoveContainer" containerID="2a94fe75830b28cfef39829faf39f2d9d6547be7ee9c81c5d48a46a9175c0260" Oct 08 13:31:26 crc kubenswrapper[5065]: E1008 13:31:26.252851 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a94fe75830b28cfef39829faf39f2d9d6547be7ee9c81c5d48a46a9175c0260\": container with ID starting with 2a94fe75830b28cfef39829faf39f2d9d6547be7ee9c81c5d48a46a9175c0260 not found: ID does not exist" containerID="2a94fe75830b28cfef39829faf39f2d9d6547be7ee9c81c5d48a46a9175c0260" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.252886 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a94fe75830b28cfef39829faf39f2d9d6547be7ee9c81c5d48a46a9175c0260"} err="failed to get container status \"2a94fe75830b28cfef39829faf39f2d9d6547be7ee9c81c5d48a46a9175c0260\": rpc error: code = NotFound desc = could not find container \"2a94fe75830b28cfef39829faf39f2d9d6547be7ee9c81c5d48a46a9175c0260\": container with ID starting with 2a94fe75830b28cfef39829faf39f2d9d6547be7ee9c81c5d48a46a9175c0260 not found: ID does not exist" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.252913 5065 scope.go:117] "RemoveContainer" containerID="e7d42b8fe1b2a1b916a4ec28281c19d345037aa9de0d3408744d55441e017cfb" Oct 08 13:31:26 crc kubenswrapper[5065]: E1008 13:31:26.253327 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7d42b8fe1b2a1b916a4ec28281c19d345037aa9de0d3408744d55441e017cfb\": container with ID starting with 
e7d42b8fe1b2a1b916a4ec28281c19d345037aa9de0d3408744d55441e017cfb not found: ID does not exist" containerID="e7d42b8fe1b2a1b916a4ec28281c19d345037aa9de0d3408744d55441e017cfb" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.253347 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d42b8fe1b2a1b916a4ec28281c19d345037aa9de0d3408744d55441e017cfb"} err="failed to get container status \"e7d42b8fe1b2a1b916a4ec28281c19d345037aa9de0d3408744d55441e017cfb\": rpc error: code = NotFound desc = could not find container \"e7d42b8fe1b2a1b916a4ec28281c19d345037aa9de0d3408744d55441e017cfb\": container with ID starting with e7d42b8fe1b2a1b916a4ec28281c19d345037aa9de0d3408744d55441e017cfb not found: ID does not exist" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.253359 5065 scope.go:117] "RemoveContainer" containerID="a76b7a3d58a6d7bead3154ade6697ed9e19eaf99083133fe066f4085d7e11dd1" Oct 08 13:31:26 crc kubenswrapper[5065]: E1008 13:31:26.253625 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a76b7a3d58a6d7bead3154ade6697ed9e19eaf99083133fe066f4085d7e11dd1\": container with ID starting with a76b7a3d58a6d7bead3154ade6697ed9e19eaf99083133fe066f4085d7e11dd1 not found: ID does not exist" containerID="a76b7a3d58a6d7bead3154ade6697ed9e19eaf99083133fe066f4085d7e11dd1" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.253647 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a76b7a3d58a6d7bead3154ade6697ed9e19eaf99083133fe066f4085d7e11dd1"} err="failed to get container status \"a76b7a3d58a6d7bead3154ade6697ed9e19eaf99083133fe066f4085d7e11dd1\": rpc error: code = NotFound desc = could not find container \"a76b7a3d58a6d7bead3154ade6697ed9e19eaf99083133fe066f4085d7e11dd1\": container with ID starting with a76b7a3d58a6d7bead3154ade6697ed9e19eaf99083133fe066f4085d7e11dd1 not found: ID does not exist" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.817813 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b0272d9-790c-40bb-9fe6-0730c7ace1c1" (UID: "4b0272d9-790c-40bb-9fe6-0730c7ace1c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:31:26 crc kubenswrapper[5065]: I1008 13:31:26.861279 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b0272d9-790c-40bb-9fe6-0730c7ace1c1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:31:27 crc kubenswrapper[5065]: I1008 13:31:27.128237 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56mb7"] Oct 08 13:31:27 crc kubenswrapper[5065]: I1008 13:31:27.141244 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-56mb7"] Oct 08 13:31:27 crc kubenswrapper[5065]: I1008 13:31:27.891350 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s8r6s"] Oct 08 13:31:27 crc kubenswrapper[5065]: E1008 13:31:27.891793 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b0272d9-790c-40bb-9fe6-0730c7ace1c1" containerName="extract-content" Oct 08 13:31:27 crc kubenswrapper[5065]: I1008 13:31:27.891804 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b0272d9-790c-40bb-9fe6-0730c7ace1c1" containerName="extract-content" Oct 08 13:31:27 crc kubenswrapper[5065]: E1008 13:31:27.891814 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b0272d9-790c-40bb-9fe6-0730c7ace1c1" containerName="extract-utilities" Oct 08 13:31:27 crc kubenswrapper[5065]: I1008 13:31:27.891821 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b0272d9-790c-40bb-9fe6-0730c7ace1c1" containerName="extract-utilities" Oct 08 13:31:27 crc kubenswrapper[5065]: E1008 13:31:27.891832 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b0272d9-790c-40bb-9fe6-0730c7ace1c1" containerName="registry-server" Oct 08 13:31:27 crc kubenswrapper[5065]: I1008 13:31:27.891838 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b0272d9-790c-40bb-9fe6-0730c7ace1c1" containerName="registry-server" Oct 08 13:31:27 crc kubenswrapper[5065]: I1008 13:31:27.891983 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b0272d9-790c-40bb-9fe6-0730c7ace1c1" containerName="registry-server" Oct 08 13:31:27 crc kubenswrapper[5065]: I1008 13:31:27.893625 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:27 crc kubenswrapper[5065]: I1008 13:31:27.903259 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s8r6s"] Oct 08 13:31:27 crc kubenswrapper[5065]: I1008 13:31:27.975463 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab061f71-e1a9-4305-b73c-99b569ee1202-utilities\") pod \"community-operators-s8r6s\" (UID: \"ab061f71-e1a9-4305-b73c-99b569ee1202\") " pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:27 crc kubenswrapper[5065]: I1008 13:31:27.975763 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab061f71-e1a9-4305-b73c-99b569ee1202-catalog-content\") pod \"community-operators-s8r6s\" (UID: \"ab061f71-e1a9-4305-b73c-99b569ee1202\") " pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:27 crc kubenswrapper[5065]: I1008 13:31:27.975925 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8kmt\" (UniqueName: \"kubernetes.io/projected/ab061f71-e1a9-4305-b73c-99b569ee1202-kube-api-access-q8kmt\") pod \"community-operators-s8r6s\" (UID: \"ab061f71-e1a9-4305-b73c-99b569ee1202\") " pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:28 crc kubenswrapper[5065]: I1008 13:31:28.077427 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8kmt\" (UniqueName: \"kubernetes.io/projected/ab061f71-e1a9-4305-b73c-99b569ee1202-kube-api-access-q8kmt\") pod \"community-operators-s8r6s\" (UID: \"ab061f71-e1a9-4305-b73c-99b569ee1202\") " pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:28 crc kubenswrapper[5065]: I1008 13:31:28.077519 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab061f71-e1a9-4305-b73c-99b569ee1202-utilities\") pod \"community-operators-s8r6s\" (UID: \"ab061f71-e1a9-4305-b73c-99b569ee1202\") " pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:28 crc kubenswrapper[5065]: I1008 13:31:28.077606 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab061f71-e1a9-4305-b73c-99b569ee1202-catalog-content\") pod \"community-operators-s8r6s\" (UID: \"ab061f71-e1a9-4305-b73c-99b569ee1202\") " pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:28 crc kubenswrapper[5065]: I1008 13:31:28.078055 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab061f71-e1a9-4305-b73c-99b569ee1202-utilities\") pod \"community-operators-s8r6s\" (UID: \"ab061f71-e1a9-4305-b73c-99b569ee1202\") " pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:28 crc kubenswrapper[5065]: I1008 13:31:28.078081 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab061f71-e1a9-4305-b73c-99b569ee1202-catalog-content\") pod \"community-operators-s8r6s\" (UID: \"ab061f71-e1a9-4305-b73c-99b569ee1202\") " pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:28 crc kubenswrapper[5065]: I1008 13:31:28.111489 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q8kmt\" (UniqueName: \"kubernetes.io/projected/ab061f71-e1a9-4305-b73c-99b569ee1202-kube-api-access-q8kmt\") pod \"community-operators-s8r6s\" (UID: \"ab061f71-e1a9-4305-b73c-99b569ee1202\") " pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:28 crc kubenswrapper[5065]: I1008 13:31:28.226823 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:28 crc kubenswrapper[5065]: I1008 13:31:28.871867 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s8r6s"] Oct 08 13:31:28 crc kubenswrapper[5065]: I1008 13:31:28.881921 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b0272d9-790c-40bb-9fe6-0730c7ace1c1" path="/var/lib/kubelet/pods/4b0272d9-790c-40bb-9fe6-0730c7ace1c1/volumes" Oct 08 13:31:29 crc kubenswrapper[5065]: I1008 13:31:29.222579 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8r6s" event={"ID":"ab061f71-e1a9-4305-b73c-99b569ee1202","Type":"ContainerStarted","Data":"2d7b0b585cd24f6b446e0fc1dfdd2e6c134704155db8f75c463f7e71aae551cb"} Oct 08 13:31:29 crc kubenswrapper[5065]: I1008 13:31:29.225801 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" event={"ID":"aa74013a-ec8e-45ca-9150-99e2a326e28f","Type":"ContainerStarted","Data":"18dd384d91cbdd5f915be115815ba9f22e1d585d30a551b49391b384a7e348cb"} Oct 08 13:31:29 crc kubenswrapper[5065]: I1008 13:31:29.226119 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" Oct 08 13:31:29 crc kubenswrapper[5065]: I1008 13:31:29.244252 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" podStartSLOduration=2.32277681 podStartE2EDuration="7.244237766s" podCreationTimestamp="2025-10-08 13:31:22 +0000 UTC" firstStartedPulling="2025-10-08 13:31:23.495630833 +0000 UTC m=+785.273012610" lastFinishedPulling="2025-10-08 13:31:28.417091809 +0000 UTC m=+790.194473566" observedRunningTime="2025-10-08 13:31:29.242540989 +0000 UTC m=+791.019922766" watchObservedRunningTime="2025-10-08 13:31:29.244237766 +0000 UTC m=+791.021619523" Oct 08 13:31:30 crc kubenswrapper[5065]: I1008 13:31:30.233397 5065 generic.go:334] "Generic (PLEG): container finished" podID="ab061f71-e1a9-4305-b73c-99b569ee1202" containerID="811b1745ff5cafb458103a2414491a85a8ab16d00accf065ebef31488cefb9c5" exitCode=0 Oct 08 13:31:30 crc kubenswrapper[5065]: I1008 13:31:30.233521 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8r6s" event={"ID":"ab061f71-e1a9-4305-b73c-99b569ee1202","Type":"ContainerDied","Data":"811b1745ff5cafb458103a2414491a85a8ab16d00accf065ebef31488cefb9c5"} Oct 08 13:31:31 crc kubenswrapper[5065]: I1008 13:31:31.240950 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" event={"ID":"f4a0f022-1142-4830-aaff-7458bd6c1c94","Type":"ContainerStarted","Data":"0b2f8394911058117313585e371b4533638c6f06f1c0c0b98e9a162c9a512eaa"} Oct 08 13:31:31 crc kubenswrapper[5065]: I1008 13:31:31.241297 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" Oct 08 13:31:31 crc kubenswrapper[5065]: I1008 13:31:31.261543 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" podStartSLOduration=1.887767478 podStartE2EDuration="9.261523901s" podCreationTimestamp="2025-10-08 13:31:22 +0000 UTC" firstStartedPulling="2025-10-08 13:31:23.214029403 +0000 UTC m=+784.991411160" lastFinishedPulling="2025-10-08 13:31:30.587785826 +0000 UTC m=+792.365167583" observedRunningTime="2025-10-08 13:31:31.259538236 +0000 UTC m=+793.036919993" watchObservedRunningTime="2025-10-08 13:31:31.261523901 +0000 UTC m=+793.038905668" Oct 08 13:31:32 crc kubenswrapper[5065]: I1008 13:31:32.248273 5065 generic.go:334] "Generic (PLEG): container finished" podID="ab061f71-e1a9-4305-b73c-99b569ee1202" containerID="b6c7ba8b6f5dfb7ac70489b603b5bad61c72c59f0af956d7c9b0b3ab4ebf1f76" exitCode=0 Oct 08 13:31:32 crc kubenswrapper[5065]: I1008 13:31:32.248335 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8r6s" event={"ID":"ab061f71-e1a9-4305-b73c-99b569ee1202","Type":"ContainerDied","Data":"b6c7ba8b6f5dfb7ac70489b603b5bad61c72c59f0af956d7c9b0b3ab4ebf1f76"} Oct 08 13:31:33 crc kubenswrapper[5065]: I1008 13:31:33.255015 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8r6s" event={"ID":"ab061f71-e1a9-4305-b73c-99b569ee1202","Type":"ContainerStarted","Data":"4b1fd32a15610d6b8763f75647e61d6152e7d026795c7454a971056efa8196ad"} Oct 08 13:31:33 crc kubenswrapper[5065]: I1008 13:31:33.269937 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s8r6s" podStartSLOduration=3.968839692 podStartE2EDuration="6.269916869s" podCreationTimestamp="2025-10-08 13:31:27 +0000 UTC" firstStartedPulling="2025-10-08 13:31:30.526068994 +0000 UTC m=+792.303450751" lastFinishedPulling="2025-10-08 13:31:32.827146151 +0000 UTC m=+794.604527928" observedRunningTime="2025-10-08 13:31:33.269325792 +0000 UTC m=+795.046707569" watchObservedRunningTime="2025-10-08 13:31:33.269916869 +0000 UTC m=+795.047298626" Oct 08 13:31:38 crc kubenswrapper[5065]: I1008 13:31:38.226996 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:38 crc kubenswrapper[5065]: I1008 13:31:38.227259 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:38 crc kubenswrapper[5065]: I1008 13:31:38.266516 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:38 crc kubenswrapper[5065]: I1008 13:31:38.319065 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:40 crc kubenswrapper[5065]: I1008 13:31:40.885131 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s8r6s"] Oct 08 13:31:40 crc kubenswrapper[5065]: I1008 13:31:40.885796 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s8r6s" podUID="ab061f71-e1a9-4305-b73c-99b569ee1202" containerName="registry-server" containerID="cri-o://4b1fd32a15610d6b8763f75647e61d6152e7d026795c7454a971056efa8196ad" 
gracePeriod=2 Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.254762 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.306966 5065 generic.go:334] "Generic (PLEG): container finished" podID="ab061f71-e1a9-4305-b73c-99b569ee1202" containerID="4b1fd32a15610d6b8763f75647e61d6152e7d026795c7454a971056efa8196ad" exitCode=0 Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.306999 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s8r6s" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.307012 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8r6s" event={"ID":"ab061f71-e1a9-4305-b73c-99b569ee1202","Type":"ContainerDied","Data":"4b1fd32a15610d6b8763f75647e61d6152e7d026795c7454a971056efa8196ad"} Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.307041 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8r6s" event={"ID":"ab061f71-e1a9-4305-b73c-99b569ee1202","Type":"ContainerDied","Data":"2d7b0b585cd24f6b446e0fc1dfdd2e6c134704155db8f75c463f7e71aae551cb"} Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.307062 5065 scope.go:117] "RemoveContainer" containerID="4b1fd32a15610d6b8763f75647e61d6152e7d026795c7454a971056efa8196ad" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.324957 5065 scope.go:117] "RemoveContainer" containerID="b6c7ba8b6f5dfb7ac70489b603b5bad61c72c59f0af956d7c9b0b3ab4ebf1f76" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.338192 5065 scope.go:117] "RemoveContainer" containerID="811b1745ff5cafb458103a2414491a85a8ab16d00accf065ebef31488cefb9c5" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.344055 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab061f71-e1a9-4305-b73c-99b569ee1202-catalog-content\") pod \"ab061f71-e1a9-4305-b73c-99b569ee1202\" (UID: \"ab061f71-e1a9-4305-b73c-99b569ee1202\") " Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.344118 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab061f71-e1a9-4305-b73c-99b569ee1202-utilities\") pod \"ab061f71-e1a9-4305-b73c-99b569ee1202\" (UID: \"ab061f71-e1a9-4305-b73c-99b569ee1202\") " Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.344230 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8kmt\" (UniqueName: \"kubernetes.io/projected/ab061f71-e1a9-4305-b73c-99b569ee1202-kube-api-access-q8kmt\") pod \"ab061f71-e1a9-4305-b73c-99b569ee1202\" (UID: \"ab061f71-e1a9-4305-b73c-99b569ee1202\") " Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.345326 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab061f71-e1a9-4305-b73c-99b569ee1202-utilities" (OuterVolumeSpecName: "utilities") pod "ab061f71-e1a9-4305-b73c-99b569ee1202" (UID: "ab061f71-e1a9-4305-b73c-99b569ee1202"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.353684 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab061f71-e1a9-4305-b73c-99b569ee1202-kube-api-access-q8kmt" (OuterVolumeSpecName: "kube-api-access-q8kmt") pod "ab061f71-e1a9-4305-b73c-99b569ee1202" (UID: "ab061f71-e1a9-4305-b73c-99b569ee1202"). InnerVolumeSpecName "kube-api-access-q8kmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.377607 5065 scope.go:117] "RemoveContainer" containerID="4b1fd32a15610d6b8763f75647e61d6152e7d026795c7454a971056efa8196ad" Oct 08 13:31:41 crc kubenswrapper[5065]: E1008 13:31:41.383836 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b1fd32a15610d6b8763f75647e61d6152e7d026795c7454a971056efa8196ad\": container with ID starting with 4b1fd32a15610d6b8763f75647e61d6152e7d026795c7454a971056efa8196ad not found: ID does not exist" containerID="4b1fd32a15610d6b8763f75647e61d6152e7d026795c7454a971056efa8196ad" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.383901 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b1fd32a15610d6b8763f75647e61d6152e7d026795c7454a971056efa8196ad"} err="failed to get container status \"4b1fd32a15610d6b8763f75647e61d6152e7d026795c7454a971056efa8196ad\": rpc error: code = NotFound desc = could not find container \"4b1fd32a15610d6b8763f75647e61d6152e7d026795c7454a971056efa8196ad\": container with ID starting with 4b1fd32a15610d6b8763f75647e61d6152e7d026795c7454a971056efa8196ad not found: ID does not exist" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.383933 5065 scope.go:117] "RemoveContainer" containerID="b6c7ba8b6f5dfb7ac70489b603b5bad61c72c59f0af956d7c9b0b3ab4ebf1f76" Oct 08 13:31:41 crc kubenswrapper[5065]: E1008 13:31:41.384437 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6c7ba8b6f5dfb7ac70489b603b5bad61c72c59f0af956d7c9b0b3ab4ebf1f76\": container with ID starting with b6c7ba8b6f5dfb7ac70489b603b5bad61c72c59f0af956d7c9b0b3ab4ebf1f76 not found: ID does not exist" containerID="b6c7ba8b6f5dfb7ac70489b603b5bad61c72c59f0af956d7c9b0b3ab4ebf1f76" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.384466 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6c7ba8b6f5dfb7ac70489b603b5bad61c72c59f0af956d7c9b0b3ab4ebf1f76"} err="failed to get container status \"b6c7ba8b6f5dfb7ac70489b603b5bad61c72c59f0af956d7c9b0b3ab4ebf1f76\": rpc error: code = NotFound desc = could not find container \"b6c7ba8b6f5dfb7ac70489b603b5bad61c72c59f0af956d7c9b0b3ab4ebf1f76\": container with ID starting with b6c7ba8b6f5dfb7ac70489b603b5bad61c72c59f0af956d7c9b0b3ab4ebf1f76 not found: ID does not exist" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.384484 5065 scope.go:117] "RemoveContainer" containerID="811b1745ff5cafb458103a2414491a85a8ab16d00accf065ebef31488cefb9c5" Oct 08 13:31:41 crc kubenswrapper[5065]: E1008 13:31:41.384705 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"811b1745ff5cafb458103a2414491a85a8ab16d00accf065ebef31488cefb9c5\": container with ID starting with 811b1745ff5cafb458103a2414491a85a8ab16d00accf065ebef31488cefb9c5 not found: ID does not 
exist" containerID="811b1745ff5cafb458103a2414491a85a8ab16d00accf065ebef31488cefb9c5" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.384729 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811b1745ff5cafb458103a2414491a85a8ab16d00accf065ebef31488cefb9c5"} err="failed to get container status \"811b1745ff5cafb458103a2414491a85a8ab16d00accf065ebef31488cefb9c5\": rpc error: code = NotFound desc = could not find container \"811b1745ff5cafb458103a2414491a85a8ab16d00accf065ebef31488cefb9c5\": container with ID starting with 811b1745ff5cafb458103a2414491a85a8ab16d00accf065ebef31488cefb9c5 not found: ID does not exist" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.414350 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab061f71-e1a9-4305-b73c-99b569ee1202-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab061f71-e1a9-4305-b73c-99b569ee1202" (UID: "ab061f71-e1a9-4305-b73c-99b569ee1202"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.445263 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab061f71-e1a9-4305-b73c-99b569ee1202-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.445305 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab061f71-e1a9-4305-b73c-99b569ee1202-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.445319 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8kmt\" (UniqueName: \"kubernetes.io/projected/ab061f71-e1a9-4305-b73c-99b569ee1202-kube-api-access-q8kmt\") on node \"crc\" DevicePath \"\"" Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.629492 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s8r6s"] Oct 08 13:31:41 crc kubenswrapper[5065]: I1008 13:31:41.635062 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s8r6s"] Oct 08 13:31:42 crc kubenswrapper[5065]: I1008 13:31:42.880246 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab061f71-e1a9-4305-b73c-99b569ee1202" path="/var/lib/kubelet/pods/ab061f71-e1a9-4305-b73c-99b569ee1202/volumes" Oct 08 13:31:43 crc kubenswrapper[5065]: I1008 13:31:43.012692 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-f4447b86b-749tv" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.097657 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hwt7f"] Oct 08 13:31:48 crc kubenswrapper[5065]: E1008 13:31:48.098666 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab061f71-e1a9-4305-b73c-99b569ee1202" containerName="registry-server" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.098720 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab061f71-e1a9-4305-b73c-99b569ee1202" containerName="registry-server" Oct 08 13:31:48 crc kubenswrapper[5065]: E1008 13:31:48.098732 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab061f71-e1a9-4305-b73c-99b569ee1202" containerName="extract-utilities" Oct 08 13:31:48 crc kubenswrapper[5065]: 
I1008 13:31:48.098738 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab061f71-e1a9-4305-b73c-99b569ee1202" containerName="extract-utilities" Oct 08 13:31:48 crc kubenswrapper[5065]: E1008 13:31:48.098747 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab061f71-e1a9-4305-b73c-99b569ee1202" containerName="extract-content" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.098752 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab061f71-e1a9-4305-b73c-99b569ee1202" containerName="extract-content" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.099582 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab061f71-e1a9-4305-b73c-99b569ee1202" containerName="registry-server" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.105296 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.119639 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwt7f"] Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.232434 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pm79\" (UniqueName: \"kubernetes.io/projected/ee894e89-6ecf-47d8-963a-85d1ed74cdce-kube-api-access-7pm79\") pod \"certified-operators-hwt7f\" (UID: \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\") " pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.232507 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee894e89-6ecf-47d8-963a-85d1ed74cdce-utilities\") pod \"certified-operators-hwt7f\" (UID: \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\") " pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.232549 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee894e89-6ecf-47d8-963a-85d1ed74cdce-catalog-content\") pod \"certified-operators-hwt7f\" (UID: \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\") " pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.333589 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee894e89-6ecf-47d8-963a-85d1ed74cdce-catalog-content\") pod \"certified-operators-hwt7f\" (UID: \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\") " pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.333722 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pm79\" (UniqueName: \"kubernetes.io/projected/ee894e89-6ecf-47d8-963a-85d1ed74cdce-kube-api-access-7pm79\") pod \"certified-operators-hwt7f\" (UID: \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\") " pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.333803 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee894e89-6ecf-47d8-963a-85d1ed74cdce-utilities\") pod \"certified-operators-hwt7f\" (UID: \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\") " 
pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.334155 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee894e89-6ecf-47d8-963a-85d1ed74cdce-catalog-content\") pod \"certified-operators-hwt7f\" (UID: \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\") " pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.334168 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee894e89-6ecf-47d8-963a-85d1ed74cdce-utilities\") pod \"certified-operators-hwt7f\" (UID: \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\") " pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.352186 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pm79\" (UniqueName: \"kubernetes.io/projected/ee894e89-6ecf-47d8-963a-85d1ed74cdce-kube-api-access-7pm79\") pod \"certified-operators-hwt7f\" (UID: \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\") " pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.467054 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:48 crc kubenswrapper[5065]: I1008 13:31:48.781140 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwt7f"] Oct 08 13:31:49 crc kubenswrapper[5065]: I1008 13:31:49.356426 5065 generic.go:334] "Generic (PLEG): container finished" podID="ee894e89-6ecf-47d8-963a-85d1ed74cdce" containerID="76da2977ac25404893a98bb761b7000524f38ff2d9e30b5f5173bbb175eb6154" exitCode=0 Oct 08 13:31:49 crc kubenswrapper[5065]: I1008 13:31:49.356474 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwt7f" event={"ID":"ee894e89-6ecf-47d8-963a-85d1ed74cdce","Type":"ContainerDied","Data":"76da2977ac25404893a98bb761b7000524f38ff2d9e30b5f5173bbb175eb6154"} Oct 08 13:31:49 crc kubenswrapper[5065]: I1008 13:31:49.356502 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwt7f" event={"ID":"ee894e89-6ecf-47d8-963a-85d1ed74cdce","Type":"ContainerStarted","Data":"3c41770722ad0656fa0d61e3ed723adf0187729083b69da750582c3157fd07cd"} Oct 08 13:31:53 crc kubenswrapper[5065]: I1008 13:31:53.379317 5065 generic.go:334] "Generic (PLEG): container finished" podID="ee894e89-6ecf-47d8-963a-85d1ed74cdce" containerID="8f37fa1879f70a8ab8eadccd734281004c372ae3a8f121e9360f62231be24b25" exitCode=0 Oct 08 13:31:53 crc kubenswrapper[5065]: I1008 13:31:53.379373 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwt7f" event={"ID":"ee894e89-6ecf-47d8-963a-85d1ed74cdce","Type":"ContainerDied","Data":"8f37fa1879f70a8ab8eadccd734281004c372ae3a8f121e9360f62231be24b25"} Oct 08 13:31:54 crc kubenswrapper[5065]: I1008 13:31:54.386075 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwt7f" event={"ID":"ee894e89-6ecf-47d8-963a-85d1ed74cdce","Type":"ContainerStarted","Data":"ff2a993f2ea5f29307dfaf98be612b86107dcda811da93ff58a9ea89ab03206d"} Oct 08 13:31:54 crc kubenswrapper[5065]: I1008 13:31:54.404704 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-hwt7f" podStartSLOduration=1.998712877 podStartE2EDuration="6.404682055s" podCreationTimestamp="2025-10-08 13:31:48 +0000 UTC" firstStartedPulling="2025-10-08 13:31:49.357760987 +0000 UTC m=+811.135142734" lastFinishedPulling="2025-10-08 13:31:53.763730155 +0000 UTC m=+815.541111912" observedRunningTime="2025-10-08 13:31:54.401033044 +0000 UTC m=+816.178414831" watchObservedRunningTime="2025-10-08 13:31:54.404682055 +0000 UTC m=+816.182063812" Oct 08 13:31:58 crc kubenswrapper[5065]: I1008 13:31:58.468086 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:58 crc kubenswrapper[5065]: I1008 13:31:58.468459 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:58 crc kubenswrapper[5065]: I1008 13:31:58.505789 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:31:59 crc kubenswrapper[5065]: I1008 13:31:59.447719 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 13:32:01 crc kubenswrapper[5065]: I1008 13:32:01.106408 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwt7f"] Oct 08 13:32:01 crc kubenswrapper[5065]: I1008 13:32:01.482742 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9bzzz"] Oct 08 13:32:01 crc kubenswrapper[5065]: I1008 13:32:01.482974 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9bzzz" podUID="d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" containerName="registry-server" containerID="cri-o://3dc127d63e2a0395c119ea30221331c86cbd4bef3dc9f59eeb10afc82a06581b" gracePeriod=2 Oct 08 13:32:01 crc kubenswrapper[5065]: I1008 13:32:01.875456 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.003038 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-utilities\") pod \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\" (UID: \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\") " Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.003102 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgrd7\" (UniqueName: \"kubernetes.io/projected/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-kube-api-access-tgrd7\") pod \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\" (UID: \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\") " Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.003150 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-catalog-content\") pod \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\" (UID: \"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7\") " Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.017064 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-utilities" (OuterVolumeSpecName: "utilities") pod "d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" (UID: "d9d5b5c0-1af6-4906-a887-cdaa05da4ce7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.023557 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-kube-api-access-tgrd7" (OuterVolumeSpecName: "kube-api-access-tgrd7") pod "d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" (UID: "d9d5b5c0-1af6-4906-a887-cdaa05da4ce7"). InnerVolumeSpecName "kube-api-access-tgrd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.061850 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" (UID: "d9d5b5c0-1af6-4906-a887-cdaa05da4ce7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.104622 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.104666 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgrd7\" (UniqueName: \"kubernetes.io/projected/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-kube-api-access-tgrd7\") on node \"crc\" DevicePath \"\"" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.104680 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.430518 5065 generic.go:334] "Generic (PLEG): container finished" podID="d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" containerID="3dc127d63e2a0395c119ea30221331c86cbd4bef3dc9f59eeb10afc82a06581b" exitCode=0 Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.430556 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bzzz" event={"ID":"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7","Type":"ContainerDied","Data":"3dc127d63e2a0395c119ea30221331c86cbd4bef3dc9f59eeb10afc82a06581b"} Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.430569 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9bzzz" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.430592 5065 scope.go:117] "RemoveContainer" containerID="3dc127d63e2a0395c119ea30221331c86cbd4bef3dc9f59eeb10afc82a06581b" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.430581 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bzzz" event={"ID":"d9d5b5c0-1af6-4906-a887-cdaa05da4ce7","Type":"ContainerDied","Data":"baa4545861c40bbc74a4a1d289cbb007fb45ffb080f3e2ef34106b70129541ec"} Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.448978 5065 scope.go:117] "RemoveContainer" containerID="2f3c7c066525a0f9a6b40e7884f7582eee933ca39660148c2c58db0235ce785c" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.466616 5065 scope.go:117] "RemoveContainer" containerID="2e3989aafa0f495d411fc6db5abc5edc0a1b7f3e17d0bf1700b5415645636e57" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.467306 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9bzzz"] Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.469979 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9bzzz"] Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.484379 5065 scope.go:117] "RemoveContainer" containerID="3dc127d63e2a0395c119ea30221331c86cbd4bef3dc9f59eeb10afc82a06581b" Oct 08 13:32:02 crc kubenswrapper[5065]: E1008 13:32:02.485097 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dc127d63e2a0395c119ea30221331c86cbd4bef3dc9f59eeb10afc82a06581b\": container with ID starting with 3dc127d63e2a0395c119ea30221331c86cbd4bef3dc9f59eeb10afc82a06581b not found: ID does not exist" containerID="3dc127d63e2a0395c119ea30221331c86cbd4bef3dc9f59eeb10afc82a06581b" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.485132 
5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dc127d63e2a0395c119ea30221331c86cbd4bef3dc9f59eeb10afc82a06581b"} err="failed to get container status \"3dc127d63e2a0395c119ea30221331c86cbd4bef3dc9f59eeb10afc82a06581b\": rpc error: code = NotFound desc = could not find container \"3dc127d63e2a0395c119ea30221331c86cbd4bef3dc9f59eeb10afc82a06581b\": container with ID starting with 3dc127d63e2a0395c119ea30221331c86cbd4bef3dc9f59eeb10afc82a06581b not found: ID does not exist" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.485158 5065 scope.go:117] "RemoveContainer" containerID="2f3c7c066525a0f9a6b40e7884f7582eee933ca39660148c2c58db0235ce785c" Oct 08 13:32:02 crc kubenswrapper[5065]: E1008 13:32:02.485630 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f3c7c066525a0f9a6b40e7884f7582eee933ca39660148c2c58db0235ce785c\": container with ID starting with 2f3c7c066525a0f9a6b40e7884f7582eee933ca39660148c2c58db0235ce785c not found: ID does not exist" containerID="2f3c7c066525a0f9a6b40e7884f7582eee933ca39660148c2c58db0235ce785c" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.485682 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f3c7c066525a0f9a6b40e7884f7582eee933ca39660148c2c58db0235ce785c"} err="failed to get container status \"2f3c7c066525a0f9a6b40e7884f7582eee933ca39660148c2c58db0235ce785c\": rpc error: code = NotFound desc = could not find container \"2f3c7c066525a0f9a6b40e7884f7582eee933ca39660148c2c58db0235ce785c\": container with ID starting with 2f3c7c066525a0f9a6b40e7884f7582eee933ca39660148c2c58db0235ce785c not found: ID does not exist" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.485696 5065 scope.go:117] "RemoveContainer" containerID="2e3989aafa0f495d411fc6db5abc5edc0a1b7f3e17d0bf1700b5415645636e57" Oct 08 13:32:02 crc kubenswrapper[5065]: E1008 13:32:02.486055 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e3989aafa0f495d411fc6db5abc5edc0a1b7f3e17d0bf1700b5415645636e57\": container with ID starting with 2e3989aafa0f495d411fc6db5abc5edc0a1b7f3e17d0bf1700b5415645636e57 not found: ID does not exist" containerID="2e3989aafa0f495d411fc6db5abc5edc0a1b7f3e17d0bf1700b5415645636e57" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.486108 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e3989aafa0f495d411fc6db5abc5edc0a1b7f3e17d0bf1700b5415645636e57"} err="failed to get container status \"2e3989aafa0f495d411fc6db5abc5edc0a1b7f3e17d0bf1700b5415645636e57\": rpc error: code = NotFound desc = could not find container \"2e3989aafa0f495d411fc6db5abc5edc0a1b7f3e17d0bf1700b5415645636e57\": container with ID starting with 2e3989aafa0f495d411fc6db5abc5edc0a1b7f3e17d0bf1700b5415645636e57 not found: ID does not exist" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.772787 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5fbf659dfd-bttmf" Oct 08 13:32:02 crc kubenswrapper[5065]: I1008 13:32:02.880065 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" path="/var/lib/kubelet/pods/d9d5b5c0-1af6-4906-a887-cdaa05da4ce7/volumes" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.525240 5065 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-jkbrg"] Oct 08 13:32:03 crc kubenswrapper[5065]: E1008 13:32:03.525526 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" containerName="registry-server" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.525543 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" containerName="registry-server" Oct 08 13:32:03 crc kubenswrapper[5065]: E1008 13:32:03.525561 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" containerName="extract-utilities" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.525569 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" containerName="extract-utilities" Oct 08 13:32:03 crc kubenswrapper[5065]: E1008 13:32:03.525594 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" containerName="extract-content" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.525604 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" containerName="extract-content" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.525735 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d5b5c0-1af6-4906-a887-cdaa05da4ce7" containerName="registry-server" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.528012 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.529839 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.530068 5065 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-677zh" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.530179 5065 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.539013 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8"] Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.539852 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.542178 5065 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.550820 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8"] Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.619923 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fqqvb"] Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.621193 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-fqqvb" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.623662 5065 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.623874 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.624011 5065 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.624191 5065 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-z7n6k" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.626097 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plcw6\" (UniqueName: \"kubernetes.io/projected/bed73333-26e4-481b-a2f6-4161b3208832-kube-api-access-plcw6\") pod \"frr-k8s-webhook-server-64bf5d555-fzmv8\" (UID: \"bed73333-26e4-481b-a2f6-4161b3208832\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.626162 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8e4d5cef-564b-426e-9481-afef3c9f6b52-reloader\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.626204 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8e4d5cef-564b-426e-9481-afef3c9f6b52-frr-conf\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.626296 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsmt2\" (UniqueName: \"kubernetes.io/projected/8e4d5cef-564b-426e-9481-afef3c9f6b52-kube-api-access-xsmt2\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.626337 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e4d5cef-564b-426e-9481-afef3c9f6b52-metrics-certs\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.626363 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8e4d5cef-564b-426e-9481-afef3c9f6b52-metrics\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.626383 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8e4d5cef-564b-426e-9481-afef3c9f6b52-frr-sockets\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.626403 5065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8e4d5cef-564b-426e-9481-afef3c9f6b52-frr-startup\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.626443 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bed73333-26e4-481b-a2f6-4161b3208832-cert\") pod \"frr-k8s-webhook-server-64bf5d555-fzmv8\" (UID: \"bed73333-26e4-481b-a2f6-4161b3208832\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.651696 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-f288k"] Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.652630 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-f288k" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.656951 5065 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.666902 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-f288k"] Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727097 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8e4d5cef-564b-426e-9481-afef3c9f6b52-reloader\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727149 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8e4d5cef-564b-426e-9481-afef3c9f6b52-frr-conf\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727170 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65acc9b9-f518-4f3a-857f-e86f6e453478-memberlist\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727206 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ccfc78e-a0ef-4111-bb6a-a09629363cc1-metrics-certs\") pod \"controller-68d546b9d8-f288k\" (UID: \"7ccfc78e-a0ef-4111-bb6a-a09629363cc1\") " pod="metallb-system/controller-68d546b9d8-f288k" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727235 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsmt2\" (UniqueName: \"kubernetes.io/projected/8e4d5cef-564b-426e-9481-afef3c9f6b52-kube-api-access-xsmt2\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727262 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e4d5cef-564b-426e-9481-afef3c9f6b52-metrics-certs\") pod \"frr-k8s-jkbrg\" (UID: 
\"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727287 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8e4d5cef-564b-426e-9481-afef3c9f6b52-metrics\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727312 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65acc9b9-f518-4f3a-857f-e86f6e453478-metallb-excludel2\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727333 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8e4d5cef-564b-426e-9481-afef3c9f6b52-frr-sockets\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727356 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8e4d5cef-564b-426e-9481-afef3c9f6b52-frr-startup\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727371 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bed73333-26e4-481b-a2f6-4161b3208832-cert\") pod \"frr-k8s-webhook-server-64bf5d555-fzmv8\" (UID: \"bed73333-26e4-481b-a2f6-4161b3208832\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727389 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ccfc78e-a0ef-4111-bb6a-a09629363cc1-cert\") pod \"controller-68d546b9d8-f288k\" (UID: \"7ccfc78e-a0ef-4111-bb6a-a09629363cc1\") " pod="metallb-system/controller-68d546b9d8-f288k" Oct 08 13:32:03 crc kubenswrapper[5065]: E1008 13:32:03.727511 5065 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Oct 08 13:32:03 crc kubenswrapper[5065]: E1008 13:32:03.727570 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e4d5cef-564b-426e-9481-afef3c9f6b52-metrics-certs podName:8e4d5cef-564b-426e-9481-afef3c9f6b52 nodeName:}" failed. No retries permitted until 2025-10-08 13:32:04.227551732 +0000 UTC m=+826.004933599 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8e4d5cef-564b-426e-9481-afef3c9f6b52-metrics-certs") pod "frr-k8s-jkbrg" (UID: "8e4d5cef-564b-426e-9481-afef3c9f6b52") : secret "frr-k8s-certs-secret" not found Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727612 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8e4d5cef-564b-426e-9481-afef3c9f6b52-frr-conf\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.728189 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8e4d5cef-564b-426e-9481-afef3c9f6b52-frr-sockets\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.728307 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8e4d5cef-564b-426e-9481-afef3c9f6b52-metrics\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.728370 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8e4d5cef-564b-426e-9481-afef3c9f6b52-reloader\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.728383 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8e4d5cef-564b-426e-9481-afef3c9f6b52-frr-startup\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.727515 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65acc9b9-f518-4f3a-857f-e86f6e453478-metrics-certs\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.728472 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jqkj\" (UniqueName: \"kubernetes.io/projected/65acc9b9-f518-4f3a-857f-e86f6e453478-kube-api-access-5jqkj\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.728496 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plcw6\" (UniqueName: \"kubernetes.io/projected/bed73333-26e4-481b-a2f6-4161b3208832-kube-api-access-plcw6\") pod \"frr-k8s-webhook-server-64bf5d555-fzmv8\" (UID: \"bed73333-26e4-481b-a2f6-4161b3208832\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.728555 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92l6\" (UniqueName: \"kubernetes.io/projected/7ccfc78e-a0ef-4111-bb6a-a09629363cc1-kube-api-access-v92l6\") pod \"controller-68d546b9d8-f288k\" (UID: 
\"7ccfc78e-a0ef-4111-bb6a-a09629363cc1\") " pod="metallb-system/controller-68d546b9d8-f288k" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.739911 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bed73333-26e4-481b-a2f6-4161b3208832-cert\") pod \"frr-k8s-webhook-server-64bf5d555-fzmv8\" (UID: \"bed73333-26e4-481b-a2f6-4161b3208832\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.744747 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsmt2\" (UniqueName: \"kubernetes.io/projected/8e4d5cef-564b-426e-9481-afef3c9f6b52-kube-api-access-xsmt2\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.755669 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plcw6\" (UniqueName: \"kubernetes.io/projected/bed73333-26e4-481b-a2f6-4161b3208832-kube-api-access-plcw6\") pod \"frr-k8s-webhook-server-64bf5d555-fzmv8\" (UID: \"bed73333-26e4-481b-a2f6-4161b3208832\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.830401 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65acc9b9-f518-4f3a-857f-e86f6e453478-metrics-certs\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.830932 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jqkj\" (UniqueName: \"kubernetes.io/projected/65acc9b9-f518-4f3a-857f-e86f6e453478-kube-api-access-5jqkj\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.830972 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92l6\" (UniqueName: \"kubernetes.io/projected/7ccfc78e-a0ef-4111-bb6a-a09629363cc1-kube-api-access-v92l6\") pod \"controller-68d546b9d8-f288k\" (UID: \"7ccfc78e-a0ef-4111-bb6a-a09629363cc1\") " pod="metallb-system/controller-68d546b9d8-f288k" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.830999 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65acc9b9-f518-4f3a-857f-e86f6e453478-memberlist\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:03 crc kubenswrapper[5065]: E1008 13:32:03.831231 5065 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 08 13:32:03 crc kubenswrapper[5065]: E1008 13:32:03.831312 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65acc9b9-f518-4f3a-857f-e86f6e453478-memberlist podName:65acc9b9-f518-4f3a-857f-e86f6e453478 nodeName:}" failed. No retries permitted until 2025-10-08 13:32:04.331294147 +0000 UTC m=+826.108675904 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/65acc9b9-f518-4f3a-857f-e86f6e453478-memberlist") pod "speaker-fqqvb" (UID: "65acc9b9-f518-4f3a-857f-e86f6e453478") : secret "metallb-memberlist" not found Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.831437 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ccfc78e-a0ef-4111-bb6a-a09629363cc1-metrics-certs\") pod \"controller-68d546b9d8-f288k\" (UID: \"7ccfc78e-a0ef-4111-bb6a-a09629363cc1\") " pod="metallb-system/controller-68d546b9d8-f288k" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.831775 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65acc9b9-f518-4f3a-857f-e86f6e453478-metallb-excludel2\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.831815 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ccfc78e-a0ef-4111-bb6a-a09629363cc1-cert\") pod \"controller-68d546b9d8-f288k\" (UID: \"7ccfc78e-a0ef-4111-bb6a-a09629363cc1\") " pod="metallb-system/controller-68d546b9d8-f288k" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.832482 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65acc9b9-f518-4f3a-857f-e86f6e453478-metallb-excludel2\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.834334 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65acc9b9-f518-4f3a-857f-e86f6e453478-metrics-certs\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.835053 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ccfc78e-a0ef-4111-bb6a-a09629363cc1-metrics-certs\") pod \"controller-68d546b9d8-f288k\" (UID: \"7ccfc78e-a0ef-4111-bb6a-a09629363cc1\") " pod="metallb-system/controller-68d546b9d8-f288k" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.848071 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ccfc78e-a0ef-4111-bb6a-a09629363cc1-cert\") pod \"controller-68d546b9d8-f288k\" (UID: \"7ccfc78e-a0ef-4111-bb6a-a09629363cc1\") " pod="metallb-system/controller-68d546b9d8-f288k" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.850701 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jqkj\" (UniqueName: \"kubernetes.io/projected/65acc9b9-f518-4f3a-857f-e86f6e453478-kube-api-access-5jqkj\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.856658 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92l6\" (UniqueName: \"kubernetes.io/projected/7ccfc78e-a0ef-4111-bb6a-a09629363cc1-kube-api-access-v92l6\") pod \"controller-68d546b9d8-f288k\" (UID: \"7ccfc78e-a0ef-4111-bb6a-a09629363cc1\") " 
pod="metallb-system/controller-68d546b9d8-f288k" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.859528 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8" Oct 08 13:32:03 crc kubenswrapper[5065]: I1008 13:32:03.968498 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-f288k" Oct 08 13:32:04 crc kubenswrapper[5065]: I1008 13:32:04.240103 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e4d5cef-564b-426e-9481-afef3c9f6b52-metrics-certs\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:04 crc kubenswrapper[5065]: I1008 13:32:04.246705 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e4d5cef-564b-426e-9481-afef3c9f6b52-metrics-certs\") pod \"frr-k8s-jkbrg\" (UID: \"8e4d5cef-564b-426e-9481-afef3c9f6b52\") " pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:04 crc kubenswrapper[5065]: I1008 13:32:04.282342 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-f288k"] Oct 08 13:32:04 crc kubenswrapper[5065]: W1008 13:32:04.290120 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ccfc78e_a0ef_4111_bb6a_a09629363cc1.slice/crio-f750fbf9cd7f9eb83483b9b60ac6186ec15020d1f724b0b7408dc53b1e819c14 WatchSource:0}: Error finding container f750fbf9cd7f9eb83483b9b60ac6186ec15020d1f724b0b7408dc53b1e819c14: Status 404 returned error can't find the container with id f750fbf9cd7f9eb83483b9b60ac6186ec15020d1f724b0b7408dc53b1e819c14 Oct 08 13:32:04 crc kubenswrapper[5065]: I1008 13:32:04.341602 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65acc9b9-f518-4f3a-857f-e86f6e453478-memberlist\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:04 crc kubenswrapper[5065]: E1008 13:32:04.341798 5065 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 08 13:32:04 crc kubenswrapper[5065]: E1008 13:32:04.341889 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65acc9b9-f518-4f3a-857f-e86f6e453478-memberlist podName:65acc9b9-f518-4f3a-857f-e86f6e453478 nodeName:}" failed. No retries permitted until 2025-10-08 13:32:05.341868968 +0000 UTC m=+827.119250725 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/65acc9b9-f518-4f3a-857f-e86f6e453478-memberlist") pod "speaker-fqqvb" (UID: "65acc9b9-f518-4f3a-857f-e86f6e453478") : secret "metallb-memberlist" not found Oct 08 13:32:04 crc kubenswrapper[5065]: I1008 13:32:04.392981 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8"] Oct 08 13:32:04 crc kubenswrapper[5065]: W1008 13:32:04.404079 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbed73333_26e4_481b_a2f6_4161b3208832.slice/crio-7c88420b620f3ee2441c79223d0ad4b6e7b3f7adc90e30ed31f361146984b749 WatchSource:0}: Error finding container 7c88420b620f3ee2441c79223d0ad4b6e7b3f7adc90e30ed31f361146984b749: Status 404 returned error can't find the container with id 7c88420b620f3ee2441c79223d0ad4b6e7b3f7adc90e30ed31f361146984b749 Oct 08 13:32:04 crc kubenswrapper[5065]: I1008 13:32:04.442258 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8" event={"ID":"bed73333-26e4-481b-a2f6-4161b3208832","Type":"ContainerStarted","Data":"7c88420b620f3ee2441c79223d0ad4b6e7b3f7adc90e30ed31f361146984b749"} Oct 08 13:32:04 crc kubenswrapper[5065]: I1008 13:32:04.442749 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:04 crc kubenswrapper[5065]: I1008 13:32:04.443844 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-f288k" event={"ID":"7ccfc78e-a0ef-4111-bb6a-a09629363cc1","Type":"ContainerStarted","Data":"e4b3f54324db0c462edee1ea9bbdb151c663457f2108b068b5eb2e61f355e167"} Oct 08 13:32:04 crc kubenswrapper[5065]: I1008 13:32:04.443878 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-f288k" event={"ID":"7ccfc78e-a0ef-4111-bb6a-a09629363cc1","Type":"ContainerStarted","Data":"f750fbf9cd7f9eb83483b9b60ac6186ec15020d1f724b0b7408dc53b1e819c14"} Oct 08 13:32:05 crc kubenswrapper[5065]: I1008 13:32:05.353086 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65acc9b9-f518-4f3a-857f-e86f6e453478-memberlist\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:05 crc kubenswrapper[5065]: I1008 13:32:05.362534 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65acc9b9-f518-4f3a-857f-e86f6e453478-memberlist\") pod \"speaker-fqqvb\" (UID: \"65acc9b9-f518-4f3a-857f-e86f6e453478\") " pod="metallb-system/speaker-fqqvb" Oct 08 13:32:05 crc kubenswrapper[5065]: I1008 13:32:05.435612 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-fqqvb" Oct 08 13:32:05 crc kubenswrapper[5065]: I1008 13:32:05.451258 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jkbrg" event={"ID":"8e4d5cef-564b-426e-9481-afef3c9f6b52","Type":"ContainerStarted","Data":"1959e6d20f2fa805bb7e06ef6a85e40e7ba8b7417756191897e0c5a0ddc19e04"} Oct 08 13:32:05 crc kubenswrapper[5065]: I1008 13:32:05.453352 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-f288k" event={"ID":"7ccfc78e-a0ef-4111-bb6a-a09629363cc1","Type":"ContainerStarted","Data":"6e1a0d9b24e65f0eb2ec88a5e4e92d43cff3d13f0aa43077c67428341bf92c5b"} Oct 08 13:32:05 crc kubenswrapper[5065]: I1008 13:32:05.454633 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-f288k" Oct 08 13:32:05 crc kubenswrapper[5065]: W1008 13:32:05.463500 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65acc9b9_f518_4f3a_857f_e86f6e453478.slice/crio-6762c1376c81627c1061b69220f876d787b44be99c7b1b22348e4da7b46ff7cd WatchSource:0}: Error finding container 6762c1376c81627c1061b69220f876d787b44be99c7b1b22348e4da7b46ff7cd: Status 404 returned error can't find the container with id 6762c1376c81627c1061b69220f876d787b44be99c7b1b22348e4da7b46ff7cd Oct 08 13:32:05 crc kubenswrapper[5065]: I1008 13:32:05.477748 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-f288k" podStartSLOduration=2.477724861 podStartE2EDuration="2.477724861s" podCreationTimestamp="2025-10-08 13:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:32:05.475871569 +0000 UTC m=+827.253253336" watchObservedRunningTime="2025-10-08 13:32:05.477724861 +0000 UTC m=+827.255106638" Oct 08 13:32:06 crc kubenswrapper[5065]: I1008 13:32:06.458787 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fqqvb" event={"ID":"65acc9b9-f518-4f3a-857f-e86f6e453478","Type":"ContainerStarted","Data":"6762c1376c81627c1061b69220f876d787b44be99c7b1b22348e4da7b46ff7cd"} Oct 08 13:32:07 crc kubenswrapper[5065]: I1008 13:32:07.467373 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fqqvb" event={"ID":"65acc9b9-f518-4f3a-857f-e86f6e453478","Type":"ContainerStarted","Data":"0d21fd11aa352e082685912aeaf941cce6866093a34e1cb84c4295ffdbacef78"} Oct 08 13:32:07 crc kubenswrapper[5065]: I1008 13:32:07.467665 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fqqvb" event={"ID":"65acc9b9-f518-4f3a-857f-e86f6e453478","Type":"ContainerStarted","Data":"b0d7cb1418ea9ad3d6e3cd66debccef27bbc7e443b1cb4ccb3dff79bc0a25d78"} Oct 08 13:32:07 crc kubenswrapper[5065]: I1008 13:32:07.486433 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-fqqvb" podStartSLOduration=4.486391497 podStartE2EDuration="4.486391497s" podCreationTimestamp="2025-10-08 13:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:32:07.482397185 +0000 UTC m=+829.259778952" watchObservedRunningTime="2025-10-08 13:32:07.486391497 +0000 UTC m=+829.263773264" Oct 08 13:32:08 crc kubenswrapper[5065]: I1008 13:32:08.486249 5065 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/speaker-fqqvb" Oct 08 13:32:12 crc kubenswrapper[5065]: I1008 13:32:12.509211 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8" event={"ID":"bed73333-26e4-481b-a2f6-4161b3208832","Type":"ContainerStarted","Data":"2ab3ac3618b96a118ca0b53b842b2012c71eff5ef539b611241c86f46e712dcc"} Oct 08 13:32:12 crc kubenswrapper[5065]: I1008 13:32:12.509741 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8" Oct 08 13:32:12 crc kubenswrapper[5065]: I1008 13:32:12.512000 5065 generic.go:334] "Generic (PLEG): container finished" podID="8e4d5cef-564b-426e-9481-afef3c9f6b52" containerID="49c0fcbdeb49fd389b9a2bb075663b6cd7f7735bb1fed5472e9596d932698128" exitCode=0 Oct 08 13:32:12 crc kubenswrapper[5065]: I1008 13:32:12.512053 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jkbrg" event={"ID":"8e4d5cef-564b-426e-9481-afef3c9f6b52","Type":"ContainerDied","Data":"49c0fcbdeb49fd389b9a2bb075663b6cd7f7735bb1fed5472e9596d932698128"} Oct 08 13:32:12 crc kubenswrapper[5065]: I1008 13:32:12.529970 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8" podStartSLOduration=1.8750460690000001 podStartE2EDuration="9.52995475s" podCreationTimestamp="2025-10-08 13:32:03 +0000 UTC" firstStartedPulling="2025-10-08 13:32:04.406301887 +0000 UTC m=+826.183683654" lastFinishedPulling="2025-10-08 13:32:12.061210578 +0000 UTC m=+833.838592335" observedRunningTime="2025-10-08 13:32:12.52851037 +0000 UTC m=+834.305892147" watchObservedRunningTime="2025-10-08 13:32:12.52995475 +0000 UTC m=+834.307336497" Oct 08 13:32:13 crc kubenswrapper[5065]: I1008 13:32:13.523567 5065 generic.go:334] "Generic (PLEG): container finished" podID="8e4d5cef-564b-426e-9481-afef3c9f6b52" containerID="86386b58a65ce9be96682640efe25a73a00c3d48bb511613162817cf00e5695e" exitCode=0 Oct 08 13:32:13 crc kubenswrapper[5065]: I1008 13:32:13.523663 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jkbrg" event={"ID":"8e4d5cef-564b-426e-9481-afef3c9f6b52","Type":"ContainerDied","Data":"86386b58a65ce9be96682640efe25a73a00c3d48bb511613162817cf00e5695e"} Oct 08 13:32:14 crc kubenswrapper[5065]: I1008 13:32:14.530318 5065 generic.go:334] "Generic (PLEG): container finished" podID="8e4d5cef-564b-426e-9481-afef3c9f6b52" containerID="2b21137f2059ebdf256ad6f7c4dd675f78ee8afe6197fa1a10d153365b2e8184" exitCode=0 Oct 08 13:32:14 crc kubenswrapper[5065]: I1008 13:32:14.530377 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jkbrg" event={"ID":"8e4d5cef-564b-426e-9481-afef3c9f6b52","Type":"ContainerDied","Data":"2b21137f2059ebdf256ad6f7c4dd675f78ee8afe6197fa1a10d153365b2e8184"} Oct 08 13:32:15 crc kubenswrapper[5065]: I1008 13:32:15.540309 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jkbrg" event={"ID":"8e4d5cef-564b-426e-9481-afef3c9f6b52","Type":"ContainerStarted","Data":"a7a109225095d989470de4a1b72f8890842a9b46ca555eb625493f1184bbc66a"} Oct 08 13:32:15 crc kubenswrapper[5065]: I1008 13:32:15.541650 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:15 crc kubenswrapper[5065]: I1008 13:32:15.541796 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jkbrg" 
event={"ID":"8e4d5cef-564b-426e-9481-afef3c9f6b52","Type":"ContainerStarted","Data":"3668ff83fac59fd7afa4283b08d2436a3bb1820397bd7354a9ab1724ca9626ed"} Oct 08 13:32:15 crc kubenswrapper[5065]: I1008 13:32:15.541918 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jkbrg" event={"ID":"8e4d5cef-564b-426e-9481-afef3c9f6b52","Type":"ContainerStarted","Data":"654fa74c3bdb70b859f99990eb392122aeb545b5008bb023feb1b14215acb806"} Oct 08 13:32:15 crc kubenswrapper[5065]: I1008 13:32:15.542048 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jkbrg" event={"ID":"8e4d5cef-564b-426e-9481-afef3c9f6b52","Type":"ContainerStarted","Data":"a44aa22f451d0f2e50ef65d519dcbc50eeb501e5e1b322159b495ea1b14c5f56"} Oct 08 13:32:15 crc kubenswrapper[5065]: I1008 13:32:15.542253 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jkbrg" event={"ID":"8e4d5cef-564b-426e-9481-afef3c9f6b52","Type":"ContainerStarted","Data":"5ebecb4946442762ec481d5794e8a8767fc024b55ea729649a91019f27467bd2"} Oct 08 13:32:15 crc kubenswrapper[5065]: I1008 13:32:15.542383 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jkbrg" event={"ID":"8e4d5cef-564b-426e-9481-afef3c9f6b52","Type":"ContainerStarted","Data":"abe82b5e39d6dea6297f20a7245311dce232840ce42839b840746cca494a4875"} Oct 08 13:32:15 crc kubenswrapper[5065]: I1008 13:32:15.569767 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-jkbrg" podStartSLOduration=5.079250055 podStartE2EDuration="12.569746807s" podCreationTimestamp="2025-10-08 13:32:03 +0000 UTC" firstStartedPulling="2025-10-08 13:32:04.552139487 +0000 UTC m=+826.329521244" lastFinishedPulling="2025-10-08 13:32:12.042636229 +0000 UTC m=+833.820017996" observedRunningTime="2025-10-08 13:32:15.568090421 +0000 UTC m=+837.345472218" watchObservedRunningTime="2025-10-08 13:32:15.569746807 +0000 UTC m=+837.347128584" Oct 08 13:32:19 crc kubenswrapper[5065]: I1008 13:32:19.443272 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:19 crc kubenswrapper[5065]: I1008 13:32:19.508731 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:23 crc kubenswrapper[5065]: I1008 13:32:23.864517 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-fzmv8" Oct 08 13:32:23 crc kubenswrapper[5065]: I1008 13:32:23.972172 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-f288k" Oct 08 13:32:24 crc kubenswrapper[5065]: I1008 13:32:24.445978 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jkbrg" Oct 08 13:32:25 crc kubenswrapper[5065]: I1008 13:32:25.439285 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-fqqvb" Oct 08 13:32:26 crc kubenswrapper[5065]: I1008 13:32:26.793776 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2"] Oct 08 13:32:26 crc kubenswrapper[5065]: I1008 13:32:26.795063 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" Oct 08 13:32:26 crc kubenswrapper[5065]: I1008 13:32:26.797200 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 08 13:32:26 crc kubenswrapper[5065]: I1008 13:32:26.804100 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2"] Oct 08 13:32:26 crc kubenswrapper[5065]: I1008 13:32:26.845852 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75792fb1-86a8-4681-b318-ddce6b276cac-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2\" (UID: \"75792fb1-86a8-4681-b318-ddce6b276cac\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" Oct 08 13:32:26 crc kubenswrapper[5065]: I1008 13:32:26.845903 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75792fb1-86a8-4681-b318-ddce6b276cac-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2\" (UID: \"75792fb1-86a8-4681-b318-ddce6b276cac\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" Oct 08 13:32:26 crc kubenswrapper[5065]: I1008 13:32:26.845922 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4nm9\" (UniqueName: \"kubernetes.io/projected/75792fb1-86a8-4681-b318-ddce6b276cac-kube-api-access-t4nm9\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2\" (UID: \"75792fb1-86a8-4681-b318-ddce6b276cac\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" Oct 08 13:32:26 crc kubenswrapper[5065]: I1008 13:32:26.947500 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75792fb1-86a8-4681-b318-ddce6b276cac-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2\" (UID: \"75792fb1-86a8-4681-b318-ddce6b276cac\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" Oct 08 13:32:26 crc kubenswrapper[5065]: I1008 13:32:26.947543 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4nm9\" (UniqueName: \"kubernetes.io/projected/75792fb1-86a8-4681-b318-ddce6b276cac-kube-api-access-t4nm9\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2\" (UID: \"75792fb1-86a8-4681-b318-ddce6b276cac\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" Oct 08 13:32:26 crc kubenswrapper[5065]: I1008 13:32:26.947639 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75792fb1-86a8-4681-b318-ddce6b276cac-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2\" (UID: \"75792fb1-86a8-4681-b318-ddce6b276cac\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" Oct 08 13:32:26 crc kubenswrapper[5065]: I1008 13:32:26.948379 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/75792fb1-86a8-4681-b318-ddce6b276cac-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2\" (UID: \"75792fb1-86a8-4681-b318-ddce6b276cac\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" Oct 08 13:32:26 crc kubenswrapper[5065]: I1008 13:32:26.948459 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75792fb1-86a8-4681-b318-ddce6b276cac-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2\" (UID: \"75792fb1-86a8-4681-b318-ddce6b276cac\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" Oct 08 13:32:26 crc kubenswrapper[5065]: I1008 13:32:26.966705 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4nm9\" (UniqueName: \"kubernetes.io/projected/75792fb1-86a8-4681-b318-ddce6b276cac-kube-api-access-t4nm9\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2\" (UID: \"75792fb1-86a8-4681-b318-ddce6b276cac\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" Oct 08 13:32:27 crc kubenswrapper[5065]: I1008 13:32:27.129246 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" Oct 08 13:32:27 crc kubenswrapper[5065]: I1008 13:32:27.587014 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2"] Oct 08 13:32:27 crc kubenswrapper[5065]: W1008 13:32:27.596564 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75792fb1_86a8_4681_b318_ddce6b276cac.slice/crio-f31264a69fd477f753f2f0331db763d9099c01375bd2944be314177ae616f40a WatchSource:0}: Error finding container f31264a69fd477f753f2f0331db763d9099c01375bd2944be314177ae616f40a: Status 404 returned error can't find the container with id f31264a69fd477f753f2f0331db763d9099c01375bd2944be314177ae616f40a Oct 08 13:32:27 crc kubenswrapper[5065]: I1008 13:32:27.616653 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" event={"ID":"75792fb1-86a8-4681-b318-ddce6b276cac","Type":"ContainerStarted","Data":"f31264a69fd477f753f2f0331db763d9099c01375bd2944be314177ae616f40a"} Oct 08 13:32:28 crc kubenswrapper[5065]: I1008 13:32:28.624712 5065 generic.go:334] "Generic (PLEG): container finished" podID="75792fb1-86a8-4681-b318-ddce6b276cac" containerID="696961b2dfe4ff32ef330c93987e496c4834b9b91402f120a5ddb30039999efc" exitCode=0 Oct 08 13:32:28 crc kubenswrapper[5065]: I1008 13:32:28.624761 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" event={"ID":"75792fb1-86a8-4681-b318-ddce6b276cac","Type":"ContainerDied","Data":"696961b2dfe4ff32ef330c93987e496c4834b9b91402f120a5ddb30039999efc"} Oct 08 13:32:32 crc kubenswrapper[5065]: I1008 13:32:32.649305 5065 generic.go:334] "Generic (PLEG): container finished" podID="75792fb1-86a8-4681-b318-ddce6b276cac" containerID="56b2d39f4eadfe27628d5b02d803f974cdd21dfdccf0747ba050cf93433f8828" exitCode=0 Oct 08 13:32:32 crc kubenswrapper[5065]: I1008 13:32:32.649372 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" event={"ID":"75792fb1-86a8-4681-b318-ddce6b276cac","Type":"ContainerDied","Data":"56b2d39f4eadfe27628d5b02d803f974cdd21dfdccf0747ba050cf93433f8828"} Oct 08 13:32:33 crc kubenswrapper[5065]: I1008 13:32:33.658488 5065 generic.go:334] "Generic (PLEG): container finished" podID="75792fb1-86a8-4681-b318-ddce6b276cac" containerID="e404a36a9cb1285d468292ac7cbf7b9aef6a2174c8c03281bb824b020cf12ee8" exitCode=0 Oct 08 13:32:33 crc kubenswrapper[5065]: I1008 13:32:33.658601 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" event={"ID":"75792fb1-86a8-4681-b318-ddce6b276cac","Type":"ContainerDied","Data":"e404a36a9cb1285d468292ac7cbf7b9aef6a2174c8c03281bb824b020cf12ee8"} Oct 08 13:32:34 crc kubenswrapper[5065]: I1008 13:32:34.945750 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" Oct 08 13:32:35 crc kubenswrapper[5065]: I1008 13:32:35.050803 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75792fb1-86a8-4681-b318-ddce6b276cac-bundle\") pod \"75792fb1-86a8-4681-b318-ddce6b276cac\" (UID: \"75792fb1-86a8-4681-b318-ddce6b276cac\") " Oct 08 13:32:35 crc kubenswrapper[5065]: I1008 13:32:35.050963 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75792fb1-86a8-4681-b318-ddce6b276cac-util\") pod \"75792fb1-86a8-4681-b318-ddce6b276cac\" (UID: \"75792fb1-86a8-4681-b318-ddce6b276cac\") " Oct 08 13:32:35 crc kubenswrapper[5065]: I1008 13:32:35.051028 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4nm9\" (UniqueName: \"kubernetes.io/projected/75792fb1-86a8-4681-b318-ddce6b276cac-kube-api-access-t4nm9\") pod \"75792fb1-86a8-4681-b318-ddce6b276cac\" (UID: \"75792fb1-86a8-4681-b318-ddce6b276cac\") " Oct 08 13:32:35 crc kubenswrapper[5065]: I1008 13:32:35.051689 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75792fb1-86a8-4681-b318-ddce6b276cac-bundle" (OuterVolumeSpecName: "bundle") pod "75792fb1-86a8-4681-b318-ddce6b276cac" (UID: "75792fb1-86a8-4681-b318-ddce6b276cac"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:32:35 crc kubenswrapper[5065]: I1008 13:32:35.056826 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75792fb1-86a8-4681-b318-ddce6b276cac-kube-api-access-t4nm9" (OuterVolumeSpecName: "kube-api-access-t4nm9") pod "75792fb1-86a8-4681-b318-ddce6b276cac" (UID: "75792fb1-86a8-4681-b318-ddce6b276cac"). InnerVolumeSpecName "kube-api-access-t4nm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:32:35 crc kubenswrapper[5065]: I1008 13:32:35.069104 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75792fb1-86a8-4681-b318-ddce6b276cac-util" (OuterVolumeSpecName: "util") pod "75792fb1-86a8-4681-b318-ddce6b276cac" (UID: "75792fb1-86a8-4681-b318-ddce6b276cac"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:32:35 crc kubenswrapper[5065]: I1008 13:32:35.153116 5065 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75792fb1-86a8-4681-b318-ddce6b276cac-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:32:35 crc kubenswrapper[5065]: I1008 13:32:35.153511 5065 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75792fb1-86a8-4681-b318-ddce6b276cac-util\") on node \"crc\" DevicePath \"\"" Oct 08 13:32:35 crc kubenswrapper[5065]: I1008 13:32:35.153662 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4nm9\" (UniqueName: \"kubernetes.io/projected/75792fb1-86a8-4681-b318-ddce6b276cac-kube-api-access-t4nm9\") on node \"crc\" DevicePath \"\"" Oct 08 13:32:35 crc kubenswrapper[5065]: I1008 13:32:35.675091 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" event={"ID":"75792fb1-86a8-4681-b318-ddce6b276cac","Type":"ContainerDied","Data":"f31264a69fd477f753f2f0331db763d9099c01375bd2944be314177ae616f40a"} Oct 08 13:32:35 crc kubenswrapper[5065]: I1008 13:32:35.675154 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f31264a69fd477f753f2f0331db763d9099c01375bd2944be314177ae616f40a" Oct 08 13:32:35 crc kubenswrapper[5065]: I1008 13:32:35.675159 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2" Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.453930 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-lbctb"] Oct 08 13:32:40 crc kubenswrapper[5065]: E1008 13:32:40.454761 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75792fb1-86a8-4681-b318-ddce6b276cac" containerName="pull" Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.454777 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="75792fb1-86a8-4681-b318-ddce6b276cac" containerName="pull" Oct 08 13:32:40 crc kubenswrapper[5065]: E1008 13:32:40.454795 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75792fb1-86a8-4681-b318-ddce6b276cac" containerName="util" Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.454806 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="75792fb1-86a8-4681-b318-ddce6b276cac" containerName="util" Oct 08 13:32:40 crc kubenswrapper[5065]: E1008 13:32:40.454818 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75792fb1-86a8-4681-b318-ddce6b276cac" containerName="extract" Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.454826 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="75792fb1-86a8-4681-b318-ddce6b276cac" containerName="extract" Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.454962 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="75792fb1-86a8-4681-b318-ddce6b276cac" containerName="extract" Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.455471 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-lbctb" Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.460823 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.461020 5065 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8lsqk" Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.461162 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.498494 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-lbctb"] Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.521505 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6hkt\" (UniqueName: \"kubernetes.io/projected/eb9e2727-86e8-462a-887c-1b1b4fbf4c2d-kube-api-access-s6hkt\") pod \"cert-manager-operator-controller-manager-57cd46d6d-lbctb\" (UID: \"eb9e2727-86e8-462a-887c-1b1b4fbf4c2d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-lbctb" Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.623324 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6hkt\" (UniqueName: \"kubernetes.io/projected/eb9e2727-86e8-462a-887c-1b1b4fbf4c2d-kube-api-access-s6hkt\") pod \"cert-manager-operator-controller-manager-57cd46d6d-lbctb\" (UID: \"eb9e2727-86e8-462a-887c-1b1b4fbf4c2d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-lbctb" Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.647621 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6hkt\" (UniqueName: \"kubernetes.io/projected/eb9e2727-86e8-462a-887c-1b1b4fbf4c2d-kube-api-access-s6hkt\") pod \"cert-manager-operator-controller-manager-57cd46d6d-lbctb\" (UID: \"eb9e2727-86e8-462a-887c-1b1b4fbf4c2d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-lbctb" Oct 08 13:32:40 crc kubenswrapper[5065]: I1008 13:32:40.813719 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-lbctb" Oct 08 13:32:41 crc kubenswrapper[5065]: I1008 13:32:41.319383 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-lbctb"] Oct 08 13:32:41 crc kubenswrapper[5065]: W1008 13:32:41.335395 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb9e2727_86e8_462a_887c_1b1b4fbf4c2d.slice/crio-a864261784d507beb167fac9cfa8a2cac09b64a845090ed0863b03f9faa99eb4 WatchSource:0}: Error finding container a864261784d507beb167fac9cfa8a2cac09b64a845090ed0863b03f9faa99eb4: Status 404 returned error can't find the container with id a864261784d507beb167fac9cfa8a2cac09b64a845090ed0863b03f9faa99eb4 Oct 08 13:32:41 crc kubenswrapper[5065]: I1008 13:32:41.708840 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-lbctb" event={"ID":"eb9e2727-86e8-462a-887c-1b1b4fbf4c2d","Type":"ContainerStarted","Data":"a864261784d507beb167fac9cfa8a2cac09b64a845090ed0863b03f9faa99eb4"} Oct 08 13:32:48 crc kubenswrapper[5065]: I1008 13:32:48.778308 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-lbctb" event={"ID":"eb9e2727-86e8-462a-887c-1b1b4fbf4c2d","Type":"ContainerStarted","Data":"f955c9cacb5ec67ffe7b19b44c78cd3f3331c53fe96bff1f7d728ac8f1b3e4c1"} Oct 08 13:32:48 crc kubenswrapper[5065]: I1008 13:32:48.797031 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-lbctb" podStartSLOduration=2.214062358 podStartE2EDuration="8.797014869s" podCreationTimestamp="2025-10-08 13:32:40 +0000 UTC" firstStartedPulling="2025-10-08 13:32:41.338059012 +0000 UTC m=+863.115440769" lastFinishedPulling="2025-10-08 13:32:47.921011523 +0000 UTC m=+869.698393280" observedRunningTime="2025-10-08 13:32:48.792691591 +0000 UTC m=+870.570073348" watchObservedRunningTime="2025-10-08 13:32:48.797014869 +0000 UTC m=+870.574396626" Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.213953 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-w2vbv"] Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.214887 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-d969966f-w2vbv" Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.217221 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.217409 5065 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-rmws2" Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.217797 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.221810 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-w2vbv"] Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.276872 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb7a3520-d62f-4b03-9d1a-f96731b4cb35-bound-sa-token\") pod \"cert-manager-webhook-d969966f-w2vbv\" (UID: \"fb7a3520-d62f-4b03-9d1a-f96731b4cb35\") " pod="cert-manager/cert-manager-webhook-d969966f-w2vbv" Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.276942 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xngbs\" (UniqueName: \"kubernetes.io/projected/fb7a3520-d62f-4b03-9d1a-f96731b4cb35-kube-api-access-xngbs\") pod \"cert-manager-webhook-d969966f-w2vbv\" (UID: \"fb7a3520-d62f-4b03-9d1a-f96731b4cb35\") " pod="cert-manager/cert-manager-webhook-d969966f-w2vbv" Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.377733 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb7a3520-d62f-4b03-9d1a-f96731b4cb35-bound-sa-token\") pod \"cert-manager-webhook-d969966f-w2vbv\" (UID: \"fb7a3520-d62f-4b03-9d1a-f96731b4cb35\") " pod="cert-manager/cert-manager-webhook-d969966f-w2vbv" Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.377788 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xngbs\" (UniqueName: \"kubernetes.io/projected/fb7a3520-d62f-4b03-9d1a-f96731b4cb35-kube-api-access-xngbs\") pod \"cert-manager-webhook-d969966f-w2vbv\" (UID: \"fb7a3520-d62f-4b03-9d1a-f96731b4cb35\") " pod="cert-manager/cert-manager-webhook-d969966f-w2vbv" Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.396030 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb7a3520-d62f-4b03-9d1a-f96731b4cb35-bound-sa-token\") pod \"cert-manager-webhook-d969966f-w2vbv\" (UID: \"fb7a3520-d62f-4b03-9d1a-f96731b4cb35\") " pod="cert-manager/cert-manager-webhook-d969966f-w2vbv" Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.396472 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xngbs\" (UniqueName: \"kubernetes.io/projected/fb7a3520-d62f-4b03-9d1a-f96731b4cb35-kube-api-access-xngbs\") pod \"cert-manager-webhook-d969966f-w2vbv\" (UID: \"fb7a3520-d62f-4b03-9d1a-f96731b4cb35\") " pod="cert-manager/cert-manager-webhook-d969966f-w2vbv" Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.535301 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-d969966f-w2vbv" Oct 08 13:32:52 crc kubenswrapper[5065]: I1008 13:32:52.984748 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-w2vbv"] Oct 08 13:32:53 crc kubenswrapper[5065]: I1008 13:32:53.810403 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-d969966f-w2vbv" event={"ID":"fb7a3520-d62f-4b03-9d1a-f96731b4cb35","Type":"ContainerStarted","Data":"7bd6f9d5548ed2ba40f8e40ce0ae879014851d812c1ba50296969143cc6207b9"} Oct 08 13:32:54 crc kubenswrapper[5065]: I1008 13:32:54.374924 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:32:54 crc kubenswrapper[5065]: I1008 13:32:54.375229 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:32:54 crc kubenswrapper[5065]: I1008 13:32:54.791552 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-vl996"] Oct 08 13:32:54 crc kubenswrapper[5065]: I1008 13:32:54.792376 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-vl996" Oct 08 13:32:54 crc kubenswrapper[5065]: W1008 13:32:54.795248 5065 reflector.go:561] object-"cert-manager"/"cert-manager-cainjector-dockercfg-gznq5": failed to list *v1.Secret: secrets "cert-manager-cainjector-dockercfg-gznq5" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object Oct 08 13:32:54 crc kubenswrapper[5065]: E1008 13:32:54.795297 5065 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-gznq5\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-manager-cainjector-dockercfg-gznq5\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 13:32:54 crc kubenswrapper[5065]: I1008 13:32:54.804763 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gr5g\" (UniqueName: \"kubernetes.io/projected/018f649c-4363-4d87-82b8-e1f93c39d71f-kube-api-access-6gr5g\") pod \"cert-manager-cainjector-7d9f95dbf-vl996\" (UID: \"018f649c-4363-4d87-82b8-e1f93c39d71f\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-vl996" Oct 08 13:32:54 crc kubenswrapper[5065]: I1008 13:32:54.805077 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/018f649c-4363-4d87-82b8-e1f93c39d71f-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-vl996\" (UID: \"018f649c-4363-4d87-82b8-e1f93c39d71f\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-vl996" Oct 08 13:32:54 crc kubenswrapper[5065]: I1008 13:32:54.806551 5065 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-vl996"] Oct 08 13:32:54 crc kubenswrapper[5065]: I1008 13:32:54.906450 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/018f649c-4363-4d87-82b8-e1f93c39d71f-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-vl996\" (UID: \"018f649c-4363-4d87-82b8-e1f93c39d71f\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-vl996" Oct 08 13:32:54 crc kubenswrapper[5065]: I1008 13:32:54.906538 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gr5g\" (UniqueName: \"kubernetes.io/projected/018f649c-4363-4d87-82b8-e1f93c39d71f-kube-api-access-6gr5g\") pod \"cert-manager-cainjector-7d9f95dbf-vl996\" (UID: \"018f649c-4363-4d87-82b8-e1f93c39d71f\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-vl996" Oct 08 13:32:54 crc kubenswrapper[5065]: I1008 13:32:54.927570 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gr5g\" (UniqueName: \"kubernetes.io/projected/018f649c-4363-4d87-82b8-e1f93c39d71f-kube-api-access-6gr5g\") pod \"cert-manager-cainjector-7d9f95dbf-vl996\" (UID: \"018f649c-4363-4d87-82b8-e1f93c39d71f\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-vl996" Oct 08 13:32:54 crc kubenswrapper[5065]: I1008 13:32:54.943639 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/018f649c-4363-4d87-82b8-e1f93c39d71f-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-vl996\" (UID: \"018f649c-4363-4d87-82b8-e1f93c39d71f\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-vl996" Oct 08 13:32:55 crc kubenswrapper[5065]: I1008 13:32:55.885882 5065 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-gznq5" Oct 08 13:32:55 crc kubenswrapper[5065]: I1008 13:32:55.891336 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-vl996" Oct 08 13:32:57 crc kubenswrapper[5065]: I1008 13:32:57.236395 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-vl996"] Oct 08 13:32:57 crc kubenswrapper[5065]: I1008 13:32:57.842279 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-d969966f-w2vbv" event={"ID":"fb7a3520-d62f-4b03-9d1a-f96731b4cb35","Type":"ContainerStarted","Data":"e91e244a9ecd8e0486b561e836db2e29c08b1b5536b62fe192d33644a79148b3"} Oct 08 13:32:57 crc kubenswrapper[5065]: I1008 13:32:57.842703 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-d969966f-w2vbv" Oct 08 13:32:57 crc kubenswrapper[5065]: I1008 13:32:57.844177 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-vl996" event={"ID":"018f649c-4363-4d87-82b8-e1f93c39d71f","Type":"ContainerStarted","Data":"5afa06551b4486a7919b1e917821c018cf03d005b6d97bb0f51a95c98554c5c2"} Oct 08 13:32:57 crc kubenswrapper[5065]: I1008 13:32:57.844210 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-vl996" event={"ID":"018f649c-4363-4d87-82b8-e1f93c39d71f","Type":"ContainerStarted","Data":"62603e5ebe4c3f4495417e352a0f1d971a0a16d9ed72b6d44986fd95265900d3"} Oct 08 13:32:57 crc kubenswrapper[5065]: I1008 13:32:57.860301 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-d969966f-w2vbv" podStartSLOduration=2.022756244 podStartE2EDuration="5.860280801s" podCreationTimestamp="2025-10-08 13:32:52 +0000 UTC" firstStartedPulling="2025-10-08 13:32:52.984609557 +0000 UTC m=+874.761991314" lastFinishedPulling="2025-10-08 13:32:56.822134114 +0000 UTC m=+878.599515871" observedRunningTime="2025-10-08 13:32:57.856520249 +0000 UTC m=+879.633902006" watchObservedRunningTime="2025-10-08 13:32:57.860280801 +0000 UTC m=+879.637662558" Oct 08 13:32:57 crc kubenswrapper[5065]: I1008 13:32:57.871338 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-vl996" podStartSLOduration=3.871319091 podStartE2EDuration="3.871319091s" podCreationTimestamp="2025-10-08 13:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:32:57.869881912 +0000 UTC m=+879.647263669" watchObservedRunningTime="2025-10-08 13:32:57.871319091 +0000 UTC m=+879.648700848" Oct 08 13:33:02 crc kubenswrapper[5065]: I1008 13:33:02.177670 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-7vwzk"] Oct 08 13:33:02 crc kubenswrapper[5065]: I1008 13:33:02.179347 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-7d4cc89fcb-7vwzk" Oct 08 13:33:02 crc kubenswrapper[5065]: I1008 13:33:02.182686 5065 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2xzzw" Oct 08 13:33:02 crc kubenswrapper[5065]: I1008 13:33:02.184542 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-7vwzk"] Oct 08 13:33:02 crc kubenswrapper[5065]: I1008 13:33:02.211957 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b31a82f5-8cf6-4b86-afc0-281fb7b63aa6-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-7vwzk\" (UID: \"b31a82f5-8cf6-4b86-afc0-281fb7b63aa6\") " pod="cert-manager/cert-manager-7d4cc89fcb-7vwzk" Oct 08 13:33:02 crc kubenswrapper[5065]: I1008 13:33:02.212032 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fczlq\" (UniqueName: \"kubernetes.io/projected/b31a82f5-8cf6-4b86-afc0-281fb7b63aa6-kube-api-access-fczlq\") pod \"cert-manager-7d4cc89fcb-7vwzk\" (UID: \"b31a82f5-8cf6-4b86-afc0-281fb7b63aa6\") " pod="cert-manager/cert-manager-7d4cc89fcb-7vwzk" Oct 08 13:33:02 crc kubenswrapper[5065]: I1008 13:33:02.313335 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b31a82f5-8cf6-4b86-afc0-281fb7b63aa6-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-7vwzk\" (UID: \"b31a82f5-8cf6-4b86-afc0-281fb7b63aa6\") " pod="cert-manager/cert-manager-7d4cc89fcb-7vwzk" Oct 08 13:33:02 crc kubenswrapper[5065]: I1008 13:33:02.313441 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fczlq\" (UniqueName: \"kubernetes.io/projected/b31a82f5-8cf6-4b86-afc0-281fb7b63aa6-kube-api-access-fczlq\") pod \"cert-manager-7d4cc89fcb-7vwzk\" (UID: \"b31a82f5-8cf6-4b86-afc0-281fb7b63aa6\") " pod="cert-manager/cert-manager-7d4cc89fcb-7vwzk" Oct 08 13:33:02 crc kubenswrapper[5065]: I1008 13:33:02.337034 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fczlq\" (UniqueName: \"kubernetes.io/projected/b31a82f5-8cf6-4b86-afc0-281fb7b63aa6-kube-api-access-fczlq\") pod \"cert-manager-7d4cc89fcb-7vwzk\" (UID: \"b31a82f5-8cf6-4b86-afc0-281fb7b63aa6\") " pod="cert-manager/cert-manager-7d4cc89fcb-7vwzk" Oct 08 13:33:02 crc kubenswrapper[5065]: I1008 13:33:02.337461 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b31a82f5-8cf6-4b86-afc0-281fb7b63aa6-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-7vwzk\" (UID: \"b31a82f5-8cf6-4b86-afc0-281fb7b63aa6\") " pod="cert-manager/cert-manager-7d4cc89fcb-7vwzk" Oct 08 13:33:02 crc kubenswrapper[5065]: I1008 13:33:02.507808 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-7d4cc89fcb-7vwzk" Oct 08 13:33:02 crc kubenswrapper[5065]: I1008 13:33:02.538323 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-d969966f-w2vbv" Oct 08 13:33:02 crc kubenswrapper[5065]: I1008 13:33:02.932953 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-7vwzk"] Oct 08 13:33:02 crc kubenswrapper[5065]: W1008 13:33:02.938677 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb31a82f5_8cf6_4b86_afc0_281fb7b63aa6.slice/crio-6b54bc64ae1c34efe3a6fb794bfcb7e7a944d7e0c602faeb880704b372dbe8a0 WatchSource:0}: Error finding container 6b54bc64ae1c34efe3a6fb794bfcb7e7a944d7e0c602faeb880704b372dbe8a0: Status 404 returned error can't find the container with id 6b54bc64ae1c34efe3a6fb794bfcb7e7a944d7e0c602faeb880704b372dbe8a0 Oct 08 13:33:03 crc kubenswrapper[5065]: I1008 13:33:03.881168 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-7d4cc89fcb-7vwzk" event={"ID":"b31a82f5-8cf6-4b86-afc0-281fb7b63aa6","Type":"ContainerStarted","Data":"bac69c56fc30680156c5e8c422e2ef8b69f2af6fb55ced68c2d5e97c80ed4087"} Oct 08 13:33:03 crc kubenswrapper[5065]: I1008 13:33:03.881470 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-7d4cc89fcb-7vwzk" event={"ID":"b31a82f5-8cf6-4b86-afc0-281fb7b63aa6","Type":"ContainerStarted","Data":"6b54bc64ae1c34efe3a6fb794bfcb7e7a944d7e0c602faeb880704b372dbe8a0"} Oct 08 13:33:03 crc kubenswrapper[5065]: I1008 13:33:03.902129 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-7d4cc89fcb-7vwzk" podStartSLOduration=1.9021056440000002 podStartE2EDuration="1.902105644s" podCreationTimestamp="2025-10-08 13:33:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:33:03.9004927 +0000 UTC m=+885.677874457" watchObservedRunningTime="2025-10-08 13:33:03.902105644 +0000 UTC m=+885.679487421" Oct 08 13:33:06 crc kubenswrapper[5065]: I1008 13:33:06.835101 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pkhd7"] Oct 08 13:33:06 crc kubenswrapper[5065]: I1008 13:33:06.837103 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pkhd7" Oct 08 13:33:06 crc kubenswrapper[5065]: I1008 13:33:06.845465 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8g6dk" Oct 08 13:33:06 crc kubenswrapper[5065]: I1008 13:33:06.845482 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 08 13:33:06 crc kubenswrapper[5065]: I1008 13:33:06.845506 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 08 13:33:06 crc kubenswrapper[5065]: I1008 13:33:06.853865 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pkhd7"] Oct 08 13:33:06 crc kubenswrapper[5065]: I1008 13:33:06.895132 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7sn7\" (UniqueName: \"kubernetes.io/projected/e2d1e722-5975-4d80-83fc-3da62341a00d-kube-api-access-w7sn7\") pod \"openstack-operator-index-pkhd7\" (UID: \"e2d1e722-5975-4d80-83fc-3da62341a00d\") " pod="openstack-operators/openstack-operator-index-pkhd7" Oct 08 13:33:06 crc kubenswrapper[5065]: I1008 13:33:06.996172 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7sn7\" (UniqueName: \"kubernetes.io/projected/e2d1e722-5975-4d80-83fc-3da62341a00d-kube-api-access-w7sn7\") pod \"openstack-operator-index-pkhd7\" (UID: \"e2d1e722-5975-4d80-83fc-3da62341a00d\") " pod="openstack-operators/openstack-operator-index-pkhd7" Oct 08 13:33:07 crc kubenswrapper[5065]: I1008 13:33:07.016689 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7sn7\" (UniqueName: \"kubernetes.io/projected/e2d1e722-5975-4d80-83fc-3da62341a00d-kube-api-access-w7sn7\") pod \"openstack-operator-index-pkhd7\" (UID: \"e2d1e722-5975-4d80-83fc-3da62341a00d\") " pod="openstack-operators/openstack-operator-index-pkhd7" Oct 08 13:33:07 crc kubenswrapper[5065]: I1008 13:33:07.169844 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pkhd7" Oct 08 13:33:07 crc kubenswrapper[5065]: I1008 13:33:07.586255 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pkhd7"] Oct 08 13:33:07 crc kubenswrapper[5065]: W1008 13:33:07.589507 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2d1e722_5975_4d80_83fc_3da62341a00d.slice/crio-4095030b6a083fcc1ce97009251ccf724ba4daf68b551517b616569282507550 WatchSource:0}: Error finding container 4095030b6a083fcc1ce97009251ccf724ba4daf68b551517b616569282507550: Status 404 returned error can't find the container with id 4095030b6a083fcc1ce97009251ccf724ba4daf68b551517b616569282507550 Oct 08 13:33:07 crc kubenswrapper[5065]: I1008 13:33:07.905291 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkhd7" event={"ID":"e2d1e722-5975-4d80-83fc-3da62341a00d","Type":"ContainerStarted","Data":"4095030b6a083fcc1ce97009251ccf724ba4daf68b551517b616569282507550"} Oct 08 13:33:09 crc kubenswrapper[5065]: I1008 13:33:09.020287 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pkhd7"] Oct 08 13:33:09 crc kubenswrapper[5065]: I1008 13:33:09.634579 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x8t65"] Oct 08 13:33:09 crc kubenswrapper[5065]: I1008 13:33:09.635868 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x8t65" Oct 08 13:33:09 crc kubenswrapper[5065]: I1008 13:33:09.646867 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x8t65"] Oct 08 13:33:09 crc kubenswrapper[5065]: I1008 13:33:09.734625 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzsnc\" (UniqueName: \"kubernetes.io/projected/a3797803-3939-430e-a68b-3861cc14e098-kube-api-access-jzsnc\") pod \"openstack-operator-index-x8t65\" (UID: \"a3797803-3939-430e-a68b-3861cc14e098\") " pod="openstack-operators/openstack-operator-index-x8t65" Oct 08 13:33:09 crc kubenswrapper[5065]: I1008 13:33:09.835684 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzsnc\" (UniqueName: \"kubernetes.io/projected/a3797803-3939-430e-a68b-3861cc14e098-kube-api-access-jzsnc\") pod \"openstack-operator-index-x8t65\" (UID: \"a3797803-3939-430e-a68b-3861cc14e098\") " pod="openstack-operators/openstack-operator-index-x8t65" Oct 08 13:33:09 crc kubenswrapper[5065]: I1008 13:33:09.859680 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzsnc\" (UniqueName: \"kubernetes.io/projected/a3797803-3939-430e-a68b-3861cc14e098-kube-api-access-jzsnc\") pod \"openstack-operator-index-x8t65\" (UID: \"a3797803-3939-430e-a68b-3861cc14e098\") " pod="openstack-operators/openstack-operator-index-x8t65" Oct 08 13:33:09 crc kubenswrapper[5065]: I1008 13:33:09.996736 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x8t65" Oct 08 13:33:10 crc kubenswrapper[5065]: I1008 13:33:10.393743 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x8t65"] Oct 08 13:33:10 crc kubenswrapper[5065]: W1008 13:33:10.408239 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3797803_3939_430e_a68b_3861cc14e098.slice/crio-7c985ceae196872bf0012dbf2306231bde962f173df472f0912ef75e4fc8e1d7 WatchSource:0}: Error finding container 7c985ceae196872bf0012dbf2306231bde962f173df472f0912ef75e4fc8e1d7: Status 404 returned error can't find the container with id 7c985ceae196872bf0012dbf2306231bde962f173df472f0912ef75e4fc8e1d7 Oct 08 13:33:10 crc kubenswrapper[5065]: I1008 13:33:10.926645 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x8t65" event={"ID":"a3797803-3939-430e-a68b-3861cc14e098","Type":"ContainerStarted","Data":"7c985ceae196872bf0012dbf2306231bde962f173df472f0912ef75e4fc8e1d7"} Oct 08 13:33:10 crc kubenswrapper[5065]: I1008 13:33:10.928726 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkhd7" event={"ID":"e2d1e722-5975-4d80-83fc-3da62341a00d","Type":"ContainerStarted","Data":"6f86779323f3c54261ea44c46f87604401c35e06a3e6dc643f27013968659298"} Oct 08 13:33:10 crc kubenswrapper[5065]: I1008 13:33:10.928887 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-pkhd7" podUID="e2d1e722-5975-4d80-83fc-3da62341a00d" containerName="registry-server" containerID="cri-o://6f86779323f3c54261ea44c46f87604401c35e06a3e6dc643f27013968659298" gracePeriod=2 Oct 08 13:33:10 crc kubenswrapper[5065]: I1008 13:33:10.946780 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pkhd7" podStartSLOduration=1.901189965 podStartE2EDuration="4.946762602s" podCreationTimestamp="2025-10-08 13:33:06 +0000 UTC" firstStartedPulling="2025-10-08 13:33:07.592309254 +0000 UTC m=+889.369691031" lastFinishedPulling="2025-10-08 13:33:10.637881901 +0000 UTC m=+892.415263668" observedRunningTime="2025-10-08 13:33:10.944595733 +0000 UTC m=+892.721977500" watchObservedRunningTime="2025-10-08 13:33:10.946762602 +0000 UTC m=+892.724144369" Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.306021 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pkhd7" Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.356112 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7sn7\" (UniqueName: \"kubernetes.io/projected/e2d1e722-5975-4d80-83fc-3da62341a00d-kube-api-access-w7sn7\") pod \"e2d1e722-5975-4d80-83fc-3da62341a00d\" (UID: \"e2d1e722-5975-4d80-83fc-3da62341a00d\") " Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.361059 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d1e722-5975-4d80-83fc-3da62341a00d-kube-api-access-w7sn7" (OuterVolumeSpecName: "kube-api-access-w7sn7") pod "e2d1e722-5975-4d80-83fc-3da62341a00d" (UID: "e2d1e722-5975-4d80-83fc-3da62341a00d"). InnerVolumeSpecName "kube-api-access-w7sn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.457614 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7sn7\" (UniqueName: \"kubernetes.io/projected/e2d1e722-5975-4d80-83fc-3da62341a00d-kube-api-access-w7sn7\") on node \"crc\" DevicePath \"\"" Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.937596 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x8t65" event={"ID":"a3797803-3939-430e-a68b-3861cc14e098","Type":"ContainerStarted","Data":"f414edddd9a0714be0e5584df674396e83ee271482f33e78c6cdaef5f8a04d4a"} Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.940072 5065 generic.go:334] "Generic (PLEG): container finished" podID="e2d1e722-5975-4d80-83fc-3da62341a00d" containerID="6f86779323f3c54261ea44c46f87604401c35e06a3e6dc643f27013968659298" exitCode=0 Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.940124 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkhd7" event={"ID":"e2d1e722-5975-4d80-83fc-3da62341a00d","Type":"ContainerDied","Data":"6f86779323f3c54261ea44c46f87604401c35e06a3e6dc643f27013968659298"} Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.940163 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkhd7" event={"ID":"e2d1e722-5975-4d80-83fc-3da62341a00d","Type":"ContainerDied","Data":"4095030b6a083fcc1ce97009251ccf724ba4daf68b551517b616569282507550"} Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.940184 5065 scope.go:117] "RemoveContainer" containerID="6f86779323f3c54261ea44c46f87604401c35e06a3e6dc643f27013968659298" Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.940284 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pkhd7" Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.962581 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x8t65" podStartSLOduration=2.355654725 podStartE2EDuration="2.962561742s" podCreationTimestamp="2025-10-08 13:33:09 +0000 UTC" firstStartedPulling="2025-10-08 13:33:10.41026484 +0000 UTC m=+892.187646617" lastFinishedPulling="2025-10-08 13:33:11.017171877 +0000 UTC m=+892.794553634" observedRunningTime="2025-10-08 13:33:11.958951733 +0000 UTC m=+893.736333490" watchObservedRunningTime="2025-10-08 13:33:11.962561742 +0000 UTC m=+893.739943499" Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.963855 5065 scope.go:117] "RemoveContainer" containerID="6f86779323f3c54261ea44c46f87604401c35e06a3e6dc643f27013968659298" Oct 08 13:33:11 crc kubenswrapper[5065]: E1008 13:33:11.964854 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f86779323f3c54261ea44c46f87604401c35e06a3e6dc643f27013968659298\": container with ID starting with 6f86779323f3c54261ea44c46f87604401c35e06a3e6dc643f27013968659298 not found: ID does not exist" containerID="6f86779323f3c54261ea44c46f87604401c35e06a3e6dc643f27013968659298" Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.964921 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f86779323f3c54261ea44c46f87604401c35e06a3e6dc643f27013968659298"} err="failed to get container status \"6f86779323f3c54261ea44c46f87604401c35e06a3e6dc643f27013968659298\": rpc error: code = NotFound desc = could not find container \"6f86779323f3c54261ea44c46f87604401c35e06a3e6dc643f27013968659298\": container with ID starting with 6f86779323f3c54261ea44c46f87604401c35e06a3e6dc643f27013968659298 not found: ID does not exist" Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.985565 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pkhd7"] Oct 08 13:33:11 crc kubenswrapper[5065]: I1008 13:33:11.990039 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-pkhd7"] Oct 08 13:33:12 crc kubenswrapper[5065]: I1008 13:33:12.887745 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d1e722-5975-4d80-83fc-3da62341a00d" path="/var/lib/kubelet/pods/e2d1e722-5975-4d80-83fc-3da62341a00d/volumes" Oct 08 13:33:19 crc kubenswrapper[5065]: I1008 13:33:19.997727 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-x8t65" Oct 08 13:33:19 crc kubenswrapper[5065]: I1008 13:33:19.998328 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-x8t65" Oct 08 13:33:20 crc kubenswrapper[5065]: I1008 13:33:20.026006 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-x8t65" Oct 08 13:33:21 crc kubenswrapper[5065]: I1008 13:33:21.044160 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-x8t65" Oct 08 13:33:24 crc kubenswrapper[5065]: I1008 13:33:24.375925 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:33:24 crc kubenswrapper[5065]: I1008 13:33:24.376334 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.270143 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j"] Oct 08 13:33:28 crc kubenswrapper[5065]: E1008 13:33:28.270976 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d1e722-5975-4d80-83fc-3da62341a00d" containerName="registry-server" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.270999 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d1e722-5975-4d80-83fc-3da62341a00d" containerName="registry-server" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.271258 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d1e722-5975-4d80-83fc-3da62341a00d" containerName="registry-server" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.273007 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.280491 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wzwsv" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.284124 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j"] Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.373487 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-bundle\") pod \"eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j\" (UID: \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\") " pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.373587 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfpnw\" (UniqueName: \"kubernetes.io/projected/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-kube-api-access-xfpnw\") pod \"eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j\" (UID: \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\") " pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.373610 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-util\") pod \"eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j\" (UID: \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\") " pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.475103 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xfpnw\" (UniqueName: \"kubernetes.io/projected/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-kube-api-access-xfpnw\") pod \"eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j\" (UID: \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\") " pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.475168 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-util\") pod \"eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j\" (UID: \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\") " pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.475263 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-bundle\") pod \"eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j\" (UID: \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\") " pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.475800 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-util\") pod \"eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j\" (UID: \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\") " pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.476063 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-bundle\") pod \"eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j\" (UID: \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\") " pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.504697 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfpnw\" (UniqueName: \"kubernetes.io/projected/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-kube-api-access-xfpnw\") pod \"eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j\" (UID: \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\") " pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" Oct 08 13:33:28 crc kubenswrapper[5065]: I1008 13:33:28.602463 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" Oct 08 13:33:29 crc kubenswrapper[5065]: I1008 13:33:29.029895 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j"] Oct 08 13:33:29 crc kubenswrapper[5065]: I1008 13:33:29.076949 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" event={"ID":"4b2477a8-1c49-49e9-9ac3-4feb075a95a0","Type":"ContainerStarted","Data":"bdb7a0accb57bf8da6cecdb8f69d8a012a3d5c3bdfe71477e996c701bcb2c470"} Oct 08 13:33:30 crc kubenswrapper[5065]: I1008 13:33:30.087930 5065 generic.go:334] "Generic (PLEG): container finished" podID="4b2477a8-1c49-49e9-9ac3-4feb075a95a0" containerID="58b2ed2685b362b3a2b3d449f498481873608c2c130e800b9de50b70197810d8" exitCode=0 Oct 08 13:33:30 crc kubenswrapper[5065]: I1008 13:33:30.088001 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" event={"ID":"4b2477a8-1c49-49e9-9ac3-4feb075a95a0","Type":"ContainerDied","Data":"58b2ed2685b362b3a2b3d449f498481873608c2c130e800b9de50b70197810d8"} Oct 08 13:33:31 crc kubenswrapper[5065]: I1008 13:33:31.096778 5065 generic.go:334] "Generic (PLEG): container finished" podID="4b2477a8-1c49-49e9-9ac3-4feb075a95a0" containerID="ab96ccdb7901dfe51bed4c0ea4fdc891f23237a77b35260eec8159d2ad61d4d2" exitCode=0 Oct 08 13:33:31 crc kubenswrapper[5065]: I1008 13:33:31.096945 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" event={"ID":"4b2477a8-1c49-49e9-9ac3-4feb075a95a0","Type":"ContainerDied","Data":"ab96ccdb7901dfe51bed4c0ea4fdc891f23237a77b35260eec8159d2ad61d4d2"} Oct 08 13:33:32 crc kubenswrapper[5065]: I1008 13:33:32.105275 5065 generic.go:334] "Generic (PLEG): container finished" podID="4b2477a8-1c49-49e9-9ac3-4feb075a95a0" containerID="b1aa570f7dfe6f148f023530725ac04d5dda58b10732563c9d8b20c2a6fb1221" exitCode=0 Oct 08 13:33:32 crc kubenswrapper[5065]: I1008 13:33:32.105333 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" event={"ID":"4b2477a8-1c49-49e9-9ac3-4feb075a95a0","Type":"ContainerDied","Data":"b1aa570f7dfe6f148f023530725ac04d5dda58b10732563c9d8b20c2a6fb1221"} Oct 08 13:33:33 crc kubenswrapper[5065]: I1008 13:33:33.390362 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" Oct 08 13:33:33 crc kubenswrapper[5065]: I1008 13:33:33.548486 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfpnw\" (UniqueName: \"kubernetes.io/projected/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-kube-api-access-xfpnw\") pod \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\" (UID: \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\") " Oct 08 13:33:33 crc kubenswrapper[5065]: I1008 13:33:33.548559 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-bundle\") pod \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\" (UID: \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\") " Oct 08 13:33:33 crc kubenswrapper[5065]: I1008 13:33:33.548594 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-util\") pod \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\" (UID: \"4b2477a8-1c49-49e9-9ac3-4feb075a95a0\") " Oct 08 13:33:33 crc kubenswrapper[5065]: I1008 13:33:33.549831 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-bundle" (OuterVolumeSpecName: "bundle") pod "4b2477a8-1c49-49e9-9ac3-4feb075a95a0" (UID: "4b2477a8-1c49-49e9-9ac3-4feb075a95a0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:33:33 crc kubenswrapper[5065]: I1008 13:33:33.556653 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-kube-api-access-xfpnw" (OuterVolumeSpecName: "kube-api-access-xfpnw") pod "4b2477a8-1c49-49e9-9ac3-4feb075a95a0" (UID: "4b2477a8-1c49-49e9-9ac3-4feb075a95a0"). InnerVolumeSpecName "kube-api-access-xfpnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:33:33 crc kubenswrapper[5065]: I1008 13:33:33.561809 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-util" (OuterVolumeSpecName: "util") pod "4b2477a8-1c49-49e9-9ac3-4feb075a95a0" (UID: "4b2477a8-1c49-49e9-9ac3-4feb075a95a0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:33:33 crc kubenswrapper[5065]: I1008 13:33:33.650382 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfpnw\" (UniqueName: \"kubernetes.io/projected/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-kube-api-access-xfpnw\") on node \"crc\" DevicePath \"\"" Oct 08 13:33:33 crc kubenswrapper[5065]: I1008 13:33:33.650490 5065 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:33:33 crc kubenswrapper[5065]: I1008 13:33:33.650509 5065 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b2477a8-1c49-49e9-9ac3-4feb075a95a0-util\") on node \"crc\" DevicePath \"\"" Oct 08 13:33:34 crc kubenswrapper[5065]: I1008 13:33:34.125449 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" event={"ID":"4b2477a8-1c49-49e9-9ac3-4feb075a95a0","Type":"ContainerDied","Data":"bdb7a0accb57bf8da6cecdb8f69d8a012a3d5c3bdfe71477e996c701bcb2c470"} Oct 08 13:33:34 crc kubenswrapper[5065]: I1008 13:33:34.126000 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdb7a0accb57bf8da6cecdb8f69d8a012a3d5c3bdfe71477e996c701bcb2c470" Oct 08 13:33:34 crc kubenswrapper[5065]: I1008 13:33:34.125550 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j" Oct 08 13:33:37 crc kubenswrapper[5065]: I1008 13:33:37.583699 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv"] Oct 08 13:33:37 crc kubenswrapper[5065]: E1008 13:33:37.584205 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b2477a8-1c49-49e9-9ac3-4feb075a95a0" containerName="pull" Oct 08 13:33:37 crc kubenswrapper[5065]: I1008 13:33:37.584216 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b2477a8-1c49-49e9-9ac3-4feb075a95a0" containerName="pull" Oct 08 13:33:37 crc kubenswrapper[5065]: E1008 13:33:37.584226 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b2477a8-1c49-49e9-9ac3-4feb075a95a0" containerName="util" Oct 08 13:33:37 crc kubenswrapper[5065]: I1008 13:33:37.584232 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b2477a8-1c49-49e9-9ac3-4feb075a95a0" containerName="util" Oct 08 13:33:37 crc kubenswrapper[5065]: E1008 13:33:37.584243 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b2477a8-1c49-49e9-9ac3-4feb075a95a0" containerName="extract" Oct 08 13:33:37 crc kubenswrapper[5065]: I1008 13:33:37.584249 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b2477a8-1c49-49e9-9ac3-4feb075a95a0" containerName="extract" Oct 08 13:33:37 crc kubenswrapper[5065]: I1008 13:33:37.584367 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b2477a8-1c49-49e9-9ac3-4feb075a95a0" containerName="extract" Oct 08 13:33:37 crc kubenswrapper[5065]: I1008 13:33:37.584964 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv" Oct 08 13:33:37 crc kubenswrapper[5065]: I1008 13:33:37.587066 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-g44gj" Oct 08 13:33:37 crc kubenswrapper[5065]: I1008 13:33:37.624885 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv"] Oct 08 13:33:37 crc kubenswrapper[5065]: I1008 13:33:37.705383 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-524b8\" (UniqueName: \"kubernetes.io/projected/357f82f7-a9c7-456b-829b-e9ee64bf828e-kube-api-access-524b8\") pod \"openstack-operator-controller-operator-55f65988b-48cvv\" (UID: \"357f82f7-a9c7-456b-829b-e9ee64bf828e\") " pod="openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv" Oct 08 13:33:37 crc kubenswrapper[5065]: I1008 13:33:37.806612 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-524b8\" (UniqueName: \"kubernetes.io/projected/357f82f7-a9c7-456b-829b-e9ee64bf828e-kube-api-access-524b8\") pod \"openstack-operator-controller-operator-55f65988b-48cvv\" (UID: \"357f82f7-a9c7-456b-829b-e9ee64bf828e\") " pod="openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv" Oct 08 13:33:37 crc kubenswrapper[5065]: I1008 13:33:37.834529 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-524b8\" (UniqueName: \"kubernetes.io/projected/357f82f7-a9c7-456b-829b-e9ee64bf828e-kube-api-access-524b8\") pod \"openstack-operator-controller-operator-55f65988b-48cvv\" (UID: \"357f82f7-a9c7-456b-829b-e9ee64bf828e\") " pod="openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv" Oct 08 13:33:37 crc kubenswrapper[5065]: I1008 13:33:37.903732 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv" Oct 08 13:33:38 crc kubenswrapper[5065]: I1008 13:33:38.385879 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv"] Oct 08 13:33:39 crc kubenswrapper[5065]: I1008 13:33:39.160518 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv" event={"ID":"357f82f7-a9c7-456b-829b-e9ee64bf828e","Type":"ContainerStarted","Data":"10daa582b69c276b541f236e26402dd593a764a0a3fc40a68e9dbded744bb6dd"} Oct 08 13:33:45 crc kubenswrapper[5065]: I1008 13:33:45.207730 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv" event={"ID":"357f82f7-a9c7-456b-829b-e9ee64bf828e","Type":"ContainerStarted","Data":"e4deaeab5edfbff81139a2dc4bc32501930ab76ba3ee52ac562d4b22655d1f71"} Oct 08 13:33:47 crc kubenswrapper[5065]: I1008 13:33:47.222628 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv" event={"ID":"357f82f7-a9c7-456b-829b-e9ee64bf828e","Type":"ContainerStarted","Data":"5500d8942419c88ea0ec833d0bb2d3bc29854a4a863c0076e49def5d029f120b"} Oct 08 13:33:47 crc kubenswrapper[5065]: I1008 13:33:47.223099 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv" Oct 08 13:33:47 crc kubenswrapper[5065]: I1008 13:33:47.253756 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv" podStartSLOduration=1.814427376 podStartE2EDuration="10.253738516s" podCreationTimestamp="2025-10-08 13:33:37 +0000 UTC" firstStartedPulling="2025-10-08 13:33:38.397309942 +0000 UTC m=+920.174691699" lastFinishedPulling="2025-10-08 13:33:46.836621082 +0000 UTC m=+928.614002839" observedRunningTime="2025-10-08 13:33:47.249038003 +0000 UTC m=+929.026419760" watchObservedRunningTime="2025-10-08 13:33:47.253738516 +0000 UTC m=+929.031120273" Oct 08 13:33:54 crc kubenswrapper[5065]: I1008 13:33:54.375744 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:33:54 crc kubenswrapper[5065]: I1008 13:33:54.376381 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:33:54 crc kubenswrapper[5065]: I1008 13:33:54.376457 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:33:54 crc kubenswrapper[5065]: I1008 13:33:54.377068 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1a1c08caf1f5c5ebf44b5caec0b83171c54c6a08c4b6c83a6707f77736bc763"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Oct 08 13:33:54 crc kubenswrapper[5065]: I1008 13:33:54.377135 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://f1a1c08caf1f5c5ebf44b5caec0b83171c54c6a08c4b6c83a6707f77736bc763" gracePeriod=600 Oct 08 13:33:55 crc kubenswrapper[5065]: I1008 13:33:55.271367 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="f1a1c08caf1f5c5ebf44b5caec0b83171c54c6a08c4b6c83a6707f77736bc763" exitCode=0 Oct 08 13:33:55 crc kubenswrapper[5065]: I1008 13:33:55.271474 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"f1a1c08caf1f5c5ebf44b5caec0b83171c54c6a08c4b6c83a6707f77736bc763"} Oct 08 13:33:55 crc kubenswrapper[5065]: I1008 13:33:55.271815 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"31f1099402b40e4377d6225bd79cd57be8759f2926970d8fbf7335327beefc81"} Oct 08 13:33:55 crc kubenswrapper[5065]: I1008 13:33:55.271847 5065 scope.go:117] "RemoveContainer" containerID="03687d9c2628c1d5d874abdb932a1eb112aa1d5d672fca57fe617c3d9d4bd54c" Oct 08 13:33:57 crc kubenswrapper[5065]: I1008 13:33:57.906890 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-55f65988b-48cvv" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.035883 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.037264 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.039842 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-76nfd" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.049547 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.050776 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.053118 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rcgrj" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.060338 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.065315 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.090324 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.091283 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.094356 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-kzskq" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.099155 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.100182 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.105639 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-7n2kz" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.115798 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.128645 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.147483 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.148695 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.150513 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-vm4rd" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.158048 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.159176 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.162411 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-j82d5" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.167022 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.168055 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.171665 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.175903 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.180477 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-mghnf" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.182017 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.190267 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dprjm\" (UniqueName: \"kubernetes.io/projected/d306130a-6424-4380-8de6-74adc212298d-kube-api-access-dprjm\") pod \"glance-operator-controller-manager-84b9b84486-rlkxt\" (UID: \"d306130a-6424-4380-8de6-74adc212298d\") " pod="openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.190312 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc5xh\" (UniqueName: \"kubernetes.io/projected/72eb96ef-8ded-45ed-a440-be05e49c7667-kube-api-access-jc5xh\") pod \"designate-operator-controller-manager-85d5d9dd78-g9s57\" (UID: \"72eb96ef-8ded-45ed-a440-be05e49c7667\") " pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.190354 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bqkv\" (UniqueName: \"kubernetes.io/projected/9848db5e-38fd-4867-a9a3-8945c5c4fc27-kube-api-access-8bqkv\") pod \"cinder-operator-controller-manager-7b7fb68549-q2tsx\" (UID: \"9848db5e-38fd-4867-a9a3-8945c5c4fc27\") " pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.190383 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzcz\" (UniqueName: \"kubernetes.io/projected/1b78b39c-e53e-4efa-96b8-185f730711fb-kube-api-access-vxzcz\") pod \"barbican-operator-controller-manager-658bdf4b74-pjzvh\" (UID: \"1b78b39c-e53e-4efa-96b8-185f730711fb\") " pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.190823 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.206585 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.207814 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.212706 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zvbl9" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.230045 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.257349 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.264128 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.264858 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.265836 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.277922 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-qgk8s" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.282592 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.282770 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-w8sdx" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.293905 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce67216a-bf27-40b0-8beb-bec511f71d94-cert\") pod \"infra-operator-controller-manager-656bcbd775-gwsl7\" (UID: \"ce67216a-bf27-40b0-8beb-bec511f71d94\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.293972 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dprjm\" (UniqueName: \"kubernetes.io/projected/d306130a-6424-4380-8de6-74adc212298d-kube-api-access-dprjm\") pod \"glance-operator-controller-manager-84b9b84486-rlkxt\" (UID: \"d306130a-6424-4380-8de6-74adc212298d\") " pod="openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.293999 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc5xh\" (UniqueName: \"kubernetes.io/projected/72eb96ef-8ded-45ed-a440-be05e49c7667-kube-api-access-jc5xh\") pod 
\"designate-operator-controller-manager-85d5d9dd78-g9s57\" (UID: \"72eb96ef-8ded-45ed-a440-be05e49c7667\") " pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.294020 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lhwn\" (UniqueName: \"kubernetes.io/projected/fc91f24e-897a-45d0-8119-d3a5e75e989d-kube-api-access-8lhwn\") pod \"ironic-operator-controller-manager-9c5c78d49-dskvg\" (UID: \"fc91f24e-897a-45d0-8119-d3a5e75e989d\") " pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.294041 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8pmw\" (UniqueName: \"kubernetes.io/projected/8538f19d-d12a-4d6f-ae3c-f71fa9dfe0ba-kube-api-access-l8pmw\") pod \"horizon-operator-controller-manager-7ffbcb7588-x8kpn\" (UID: \"8538f19d-d12a-4d6f-ae3c-f71fa9dfe0ba\") " pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.294072 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgwk\" (UniqueName: \"kubernetes.io/projected/ce67216a-bf27-40b0-8beb-bec511f71d94-kube-api-access-rkgwk\") pod \"infra-operator-controller-manager-656bcbd775-gwsl7\" (UID: \"ce67216a-bf27-40b0-8beb-bec511f71d94\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.294107 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bqkv\" (UniqueName: \"kubernetes.io/projected/9848db5e-38fd-4867-a9a3-8945c5c4fc27-kube-api-access-8bqkv\") pod \"cinder-operator-controller-manager-7b7fb68549-q2tsx\" (UID: \"9848db5e-38fd-4867-a9a3-8945c5c4fc27\") " pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.294131 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxzx\" (UniqueName: \"kubernetes.io/projected/8415abb2-d31f-443c-b458-775e281540a6-kube-api-access-4wxzx\") pod \"heat-operator-controller-manager-858f76bbdd-xlm5g\" (UID: \"8415abb2-d31f-443c-b458-775e281540a6\") " pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.294158 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzcz\" (UniqueName: \"kubernetes.io/projected/1b78b39c-e53e-4efa-96b8-185f730711fb-kube-api-access-vxzcz\") pod \"barbican-operator-controller-manager-658bdf4b74-pjzvh\" (UID: \"1b78b39c-e53e-4efa-96b8-185f730711fb\") " pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.342796 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.344610 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.346968 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc5xh\" (UniqueName: \"kubernetes.io/projected/72eb96ef-8ded-45ed-a440-be05e49c7667-kube-api-access-jc5xh\") pod \"designate-operator-controller-manager-85d5d9dd78-g9s57\" (UID: \"72eb96ef-8ded-45ed-a440-be05e49c7667\") " pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.347113 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-vrxfn" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.350084 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bqkv\" (UniqueName: \"kubernetes.io/projected/9848db5e-38fd-4867-a9a3-8945c5c4fc27-kube-api-access-8bqkv\") pod \"cinder-operator-controller-manager-7b7fb68549-q2tsx\" (UID: \"9848db5e-38fd-4867-a9a3-8945c5c4fc27\") " pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.352195 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzcz\" (UniqueName: \"kubernetes.io/projected/1b78b39c-e53e-4efa-96b8-185f730711fb-kube-api-access-vxzcz\") pod \"barbican-operator-controller-manager-658bdf4b74-pjzvh\" (UID: \"1b78b39c-e53e-4efa-96b8-185f730711fb\") " pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.354878 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.362078 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dprjm\" (UniqueName: \"kubernetes.io/projected/d306130a-6424-4380-8de6-74adc212298d-kube-api-access-dprjm\") pod \"glance-operator-controller-manager-84b9b84486-rlkxt\" (UID: \"d306130a-6424-4380-8de6-74adc212298d\") " pod="openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.368490 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.383898 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.384932 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.388161 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-zkjf9" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.397472 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.398081 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lhwn\" (UniqueName: \"kubernetes.io/projected/fc91f24e-897a-45d0-8119-d3a5e75e989d-kube-api-access-8lhwn\") pod \"ironic-operator-controller-manager-9c5c78d49-dskvg\" (UID: \"fc91f24e-897a-45d0-8119-d3a5e75e989d\") " pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.398107 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8pmw\" (UniqueName: \"kubernetes.io/projected/8538f19d-d12a-4d6f-ae3c-f71fa9dfe0ba-kube-api-access-l8pmw\") pod \"horizon-operator-controller-manager-7ffbcb7588-x8kpn\" (UID: \"8538f19d-d12a-4d6f-ae3c-f71fa9dfe0ba\") " pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.398138 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkgwk\" (UniqueName: \"kubernetes.io/projected/ce67216a-bf27-40b0-8beb-bec511f71d94-kube-api-access-rkgwk\") pod \"infra-operator-controller-manager-656bcbd775-gwsl7\" (UID: \"ce67216a-bf27-40b0-8beb-bec511f71d94\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.398160 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wxzx\" (UniqueName: \"kubernetes.io/projected/8415abb2-d31f-443c-b458-775e281540a6-kube-api-access-4wxzx\") pod \"heat-operator-controller-manager-858f76bbdd-xlm5g\" (UID: \"8415abb2-d31f-443c-b458-775e281540a6\") " pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.398188 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24kcx\" (UniqueName: \"kubernetes.io/projected/6f6d22ca-e21b-4cc9-8640-ac38a35bbd7a-kube-api-access-24kcx\") pod \"mariadb-operator-controller-manager-f9fb45f8f-rqvgq\" (UID: \"6f6d22ca-e21b-4cc9-8640-ac38a35bbd7a\") " pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.398223 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2lgn\" (UniqueName: \"kubernetes.io/projected/c4e220c5-f7d7-41aa-b250-94c0fc693dd9-kube-api-access-n2lgn\") pod \"keystone-operator-controller-manager-55b6b7c7b8-p8j4j\" (UID: \"c4e220c5-f7d7-41aa-b250-94c0fc693dd9\") " pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.398252 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce67216a-bf27-40b0-8beb-bec511f71d94-cert\") pod 
\"infra-operator-controller-manager-656bcbd775-gwsl7\" (UID: \"ce67216a-bf27-40b0-8beb-bec511f71d94\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.398268 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j67cm\" (UniqueName: \"kubernetes.io/projected/55bc4a42-9132-41b0-bd10-d05a51fff80e-kube-api-access-j67cm\") pod \"manila-operator-controller-manager-5f67fbc655-zptvr\" (UID: \"55bc4a42-9132-41b0-bd10-d05a51fff80e\") " pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr" Oct 08 13:34:14 crc kubenswrapper[5065]: E1008 13:34:14.398850 5065 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 08 13:34:14 crc kubenswrapper[5065]: E1008 13:34:14.398895 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce67216a-bf27-40b0-8beb-bec511f71d94-cert podName:ce67216a-bf27-40b0-8beb-bec511f71d94 nodeName:}" failed. No retries permitted until 2025-10-08 13:34:14.898879477 +0000 UTC m=+956.676261234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce67216a-bf27-40b0-8beb-bec511f71d94-cert") pod "infra-operator-controller-manager-656bcbd775-gwsl7" (UID: "ce67216a-bf27-40b0-8beb-bec511f71d94") : secret "infra-operator-webhook-server-cert" not found Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.413263 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.421013 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.426855 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wxzx\" (UniqueName: \"kubernetes.io/projected/8415abb2-d31f-443c-b458-775e281540a6-kube-api-access-4wxzx\") pod \"heat-operator-controller-manager-858f76bbdd-xlm5g\" (UID: \"8415abb2-d31f-443c-b458-775e281540a6\") " pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.428703 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8pmw\" (UniqueName: \"kubernetes.io/projected/8538f19d-d12a-4d6f-ae3c-f71fa9dfe0ba-kube-api-access-l8pmw\") pod \"horizon-operator-controller-manager-7ffbcb7588-x8kpn\" (UID: \"8538f19d-d12a-4d6f-ae3c-f71fa9dfe0ba\") " pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.431608 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.432701 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.436138 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.438086 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5zdhw" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.438171 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.439143 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lhwn\" (UniqueName: \"kubernetes.io/projected/fc91f24e-897a-45d0-8119-d3a5e75e989d-kube-api-access-8lhwn\") pod \"ironic-operator-controller-manager-9c5c78d49-dskvg\" (UID: \"fc91f24e-897a-45d0-8119-d3a5e75e989d\") " pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.440528 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.440999 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkgwk\" (UniqueName: \"kubernetes.io/projected/ce67216a-bf27-40b0-8beb-bec511f71d94-kube-api-access-rkgwk\") pod \"infra-operator-controller-manager-656bcbd775-gwsl7\" (UID: \"ce67216a-bf27-40b0-8beb-bec511f71d94\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.451088 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.451219 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.455977 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-76xjb" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.467404 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.480146 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.481205 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.481845 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.482731 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.484287 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-b5pcq" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.493254 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.494646 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.497537 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-s8qrw" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.497689 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.499093 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j67cm\" (UniqueName: \"kubernetes.io/projected/55bc4a42-9132-41b0-bd10-d05a51fff80e-kube-api-access-j67cm\") pod \"manila-operator-controller-manager-5f67fbc655-zptvr\" (UID: \"55bc4a42-9132-41b0-bd10-d05a51fff80e\") " pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.499191 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24kcx\" (UniqueName: \"kubernetes.io/projected/6f6d22ca-e21b-4cc9-8640-ac38a35bbd7a-kube-api-access-24kcx\") pod \"mariadb-operator-controller-manager-f9fb45f8f-rqvgq\" (UID: \"6f6d22ca-e21b-4cc9-8640-ac38a35bbd7a\") " pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.499234 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqz5g\" (UniqueName: \"kubernetes.io/projected/2e76f34c-8ac9-408e-96ca-0eaf5aa470cf-kube-api-access-vqz5g\") pod \"nova-operator-controller-manager-5df598886f-5k4b8\" (UID: \"2e76f34c-8ac9-408e-96ca-0eaf5aa470cf\") " pod="openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.499256 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcc74\" (UniqueName: \"kubernetes.io/projected/fb914e93-c33d-44ae-a713-7bd24af3faff-kube-api-access-tcc74\") pod \"octavia-operator-controller-manager-69fdcfc5f5-7fv6z\" (UID: \"fb914e93-c33d-44ae-a713-7bd24af3faff\") " pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.499275 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2lgn\" (UniqueName: \"kubernetes.io/projected/c4e220c5-f7d7-41aa-b250-94c0fc693dd9-kube-api-access-n2lgn\") pod \"keystone-operator-controller-manager-55b6b7c7b8-p8j4j\" 
(UID: \"c4e220c5-f7d7-41aa-b250-94c0fc693dd9\") " pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.499305 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm9p6\" (UniqueName: \"kubernetes.io/projected/a1ef138e-172a-4b51-aaca-1bfb30b7cc3a-kube-api-access-vm9p6\") pod \"neutron-operator-controller-manager-79d585cb66-tlrpv\" (UID: \"a1ef138e-172a-4b51-aaca-1bfb30b7cc3a\") " pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.508400 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.514054 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.516112 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.520530 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-wsndj" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.523834 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j67cm\" (UniqueName: \"kubernetes.io/projected/55bc4a42-9132-41b0-bd10-d05a51fff80e-kube-api-access-j67cm\") pod \"manila-operator-controller-manager-5f67fbc655-zptvr\" (UID: \"55bc4a42-9132-41b0-bd10-d05a51fff80e\") " pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.525295 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.536517 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.537187 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24kcx\" (UniqueName: \"kubernetes.io/projected/6f6d22ca-e21b-4cc9-8640-ac38a35bbd7a-kube-api-access-24kcx\") pod \"mariadb-operator-controller-manager-f9fb45f8f-rqvgq\" (UID: \"6f6d22ca-e21b-4cc9-8640-ac38a35bbd7a\") " pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.537244 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.547450 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2lgn\" (UniqueName: \"kubernetes.io/projected/c4e220c5-f7d7-41aa-b250-94c0fc693dd9-kube-api-access-n2lgn\") pod \"keystone-operator-controller-manager-55b6b7c7b8-p8j4j\" (UID: \"c4e220c5-f7d7-41aa-b250-94c0fc693dd9\") " pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.587726 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.599971 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcc74\" (UniqueName: \"kubernetes.io/projected/fb914e93-c33d-44ae-a713-7bd24af3faff-kube-api-access-tcc74\") pod \"octavia-operator-controller-manager-69fdcfc5f5-7fv6z\" (UID: \"fb914e93-c33d-44ae-a713-7bd24af3faff\") " pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.600017 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llf4t\" (UniqueName: \"kubernetes.io/projected/b72ec83c-136e-4cde-8aa9-e978bbe7cd2a-kube-api-access-llf4t\") pod \"openstack-baremetal-operator-controller-manager-747747dfccng5kz\" (UID: \"b72ec83c-136e-4cde-8aa9-e978bbe7cd2a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.600055 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm9p6\" (UniqueName: \"kubernetes.io/projected/a1ef138e-172a-4b51-aaca-1bfb30b7cc3a-kube-api-access-vm9p6\") pod \"neutron-operator-controller-manager-79d585cb66-tlrpv\" (UID: \"a1ef138e-172a-4b51-aaca-1bfb30b7cc3a\") " pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.600114 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b72ec83c-136e-4cde-8aa9-e978bbe7cd2a-cert\") pod \"openstack-baremetal-operator-controller-manager-747747dfccng5kz\" (UID: \"b72ec83c-136e-4cde-8aa9-e978bbe7cd2a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.600162 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjvp9\" (UniqueName: \"kubernetes.io/projected/2b64adbd-0608-45f4-aaf8-3d7af011873e-kube-api-access-fjvp9\") pod \"placement-operator-controller-manager-68b6c87b68-9wdcx\" (UID: \"2b64adbd-0608-45f4-aaf8-3d7af011873e\") " pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.600205 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwtbs\" (UniqueName: \"kubernetes.io/projected/9d528074-25d9-43df-80ac-e7f4aa8573bc-kube-api-access-cwtbs\") pod \"ovn-operator-controller-manager-79db49b9fb-mw7bb\" (UID: \"9d528074-25d9-43df-80ac-e7f4aa8573bc\") " pod="openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.600265 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqz5g\" (UniqueName: \"kubernetes.io/projected/2e76f34c-8ac9-408e-96ca-0eaf5aa470cf-kube-api-access-vqz5g\") pod \"nova-operator-controller-manager-5df598886f-5k4b8\" (UID: \"2e76f34c-8ac9-408e-96ca-0eaf5aa470cf\") " pod="openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.603866 5065 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.604870 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.606869 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-hhb9t" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.624977 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcc74\" (UniqueName: \"kubernetes.io/projected/fb914e93-c33d-44ae-a713-7bd24af3faff-kube-api-access-tcc74\") pod \"octavia-operator-controller-manager-69fdcfc5f5-7fv6z\" (UID: \"fb914e93-c33d-44ae-a713-7bd24af3faff\") " pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.629140 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm9p6\" (UniqueName: \"kubernetes.io/projected/a1ef138e-172a-4b51-aaca-1bfb30b7cc3a-kube-api-access-vm9p6\") pod \"neutron-operator-controller-manager-79d585cb66-tlrpv\" (UID: \"a1ef138e-172a-4b51-aaca-1bfb30b7cc3a\") " pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.629778 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqz5g\" (UniqueName: \"kubernetes.io/projected/2e76f34c-8ac9-408e-96ca-0eaf5aa470cf-kube-api-access-vqz5g\") pod \"nova-operator-controller-manager-5df598886f-5k4b8\" (UID: \"2e76f34c-8ac9-408e-96ca-0eaf5aa470cf\") " pod="openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.635128 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.658697 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.682552 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.684076 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.687234 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-sns76" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.690458 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.701986 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b72ec83c-136e-4cde-8aa9-e978bbe7cd2a-cert\") pod \"openstack-baremetal-operator-controller-manager-747747dfccng5kz\" (UID: \"b72ec83c-136e-4cde-8aa9-e978bbe7cd2a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.702042 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjvp9\" (UniqueName: \"kubernetes.io/projected/2b64adbd-0608-45f4-aaf8-3d7af011873e-kube-api-access-fjvp9\") pod \"placement-operator-controller-manager-68b6c87b68-9wdcx\" (UID: \"2b64adbd-0608-45f4-aaf8-3d7af011873e\") " pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.702080 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwtbs\" (UniqueName: \"kubernetes.io/projected/9d528074-25d9-43df-80ac-e7f4aa8573bc-kube-api-access-cwtbs\") pod \"ovn-operator-controller-manager-79db49b9fb-mw7bb\" (UID: \"9d528074-25d9-43df-80ac-e7f4aa8573bc\") " pod="openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.702123 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llf4t\" (UniqueName: \"kubernetes.io/projected/b72ec83c-136e-4cde-8aa9-e978bbe7cd2a-kube-api-access-llf4t\") pod \"openstack-baremetal-operator-controller-manager-747747dfccng5kz\" (UID: \"b72ec83c-136e-4cde-8aa9-e978bbe7cd2a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.702186 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rqln\" (UniqueName: \"kubernetes.io/projected/bfa86e67-08ac-4df7-84fe-c084e6c05bc1-kube-api-access-6rqln\") pod \"swift-operator-controller-manager-db6d7f97b-947sm\" (UID: \"bfa86e67-08ac-4df7-84fe-c084e6c05bc1\") " pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm" Oct 08 13:34:14 crc kubenswrapper[5065]: E1008 13:34:14.702311 5065 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 08 13:34:14 crc kubenswrapper[5065]: E1008 13:34:14.702347 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72ec83c-136e-4cde-8aa9-e978bbe7cd2a-cert podName:b72ec83c-136e-4cde-8aa9-e978bbe7cd2a nodeName:}" failed. No retries permitted until 2025-10-08 13:34:15.202334111 +0000 UTC m=+956.979715868 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b72ec83c-136e-4cde-8aa9-e978bbe7cd2a-cert") pod "openstack-baremetal-operator-controller-manager-747747dfccng5kz" (UID: "b72ec83c-136e-4cde-8aa9-e978bbe7cd2a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.740181 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwtbs\" (UniqueName: \"kubernetes.io/projected/9d528074-25d9-43df-80ac-e7f4aa8573bc-kube-api-access-cwtbs\") pod \"ovn-operator-controller-manager-79db49b9fb-mw7bb\" (UID: \"9d528074-25d9-43df-80ac-e7f4aa8573bc\") " pod="openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.740464 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56c698c775-nct4q"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.740858 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjvp9\" (UniqueName: \"kubernetes.io/projected/2b64adbd-0608-45f4-aaf8-3d7af011873e-kube-api-access-fjvp9\") pod \"placement-operator-controller-manager-68b6c87b68-9wdcx\" (UID: \"2b64adbd-0608-45f4-aaf8-3d7af011873e\") " pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.741712 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.744263 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fl5g9" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.756524 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56c698c775-nct4q"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.780533 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.782059 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.784044 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-46mjb" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.790478 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llf4t\" (UniqueName: \"kubernetes.io/projected/b72ec83c-136e-4cde-8aa9-e978bbe7cd2a-kube-api-access-llf4t\") pod \"openstack-baremetal-operator-controller-manager-747747dfccng5kz\" (UID: \"b72ec83c-136e-4cde-8aa9-e978bbe7cd2a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.794026 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.795224 5065 util.go:30] "No sandbox for pod can be found. 
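The paired E1008 entries at 13:34:14.702311/.702347 explain the first stuck mount: the pod's cert volume references the secret openstack-baremetal-operator-webhook-server-cert, which does not exist yet (webhook serving certs are typically published after the operator bundle settles), so the kubelet schedules a retry 500ms out and doubles the delay on each subsequent failure. A minimal troubleshooting sketch, assuming kubeconfig access to this cluster; it only polls for the secret and creates nothing:

```go
// Poll for the secret the kubelet is waiting on, mirroring its doubling
// retry cadence (500ms in the entry above, 1s on the next failure).
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	backoff := 500 * time.Millisecond // matches durationBeforeRetry in the log
	for {
		_, err := cs.CoreV1().Secrets("openstack-operators").Get(context.TODO(),
			"openstack-baremetal-operator-webhook-server-cert", metav1.GetOptions{})
		if err == nil {
			fmt.Println("secret exists; the cert volume mount should succeed on the next retry")
			return
		}
		fmt.Printf("still missing (%v); next check in %v\n", err, backoff)
		time.Sleep(backoff)
		if backoff < 2*time.Minute { // cap the sketch's backoff, as the kubelet caps its own
			backoff *= 2
		}
	}
}
```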
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.803091 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z5cv\" (UniqueName: \"kubernetes.io/projected/4af395f3-a5b6-4f08-9cf9-99ca8ef679f1-kube-api-access-5z5cv\") pod \"test-operator-controller-manager-56c698c775-nct4q\" (UID: \"4af395f3-a5b6-4f08-9cf9-99ca8ef679f1\") " pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.803349 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rqln\" (UniqueName: \"kubernetes.io/projected/bfa86e67-08ac-4df7-84fe-c084e6c05bc1-kube-api-access-6rqln\") pod \"swift-operator-controller-manager-db6d7f97b-947sm\" (UID: \"bfa86e67-08ac-4df7-84fe-c084e6c05bc1\") " pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.803478 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5hwg\" (UniqueName: \"kubernetes.io/projected/b9279847-3be8-4917-b1e3-b2d4459f45de-kube-api-access-s5hwg\") pod \"telemetry-operator-controller-manager-76796d4c6b-rwmpv\" (UID: \"b9279847-3be8-4917-b1e3-b2d4459f45de\") " pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.808392 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.827590 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.832083 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rqln\" (UniqueName: \"kubernetes.io/projected/bfa86e67-08ac-4df7-84fe-c084e6c05bc1-kube-api-access-6rqln\") pod \"swift-operator-controller-manager-db6d7f97b-947sm\" (UID: \"bfa86e67-08ac-4df7-84fe-c084e6c05bc1\") " pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.834323 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.838078 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.845058 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.845642 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2fc7z" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.858798 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.901825 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.910397 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzlkc\" (UniqueName: \"kubernetes.io/projected/cf758b71-afa0-4ca6-a481-4a01aa013427-kube-api-access-fzlkc\") pod \"openstack-operator-controller-manager-8bc6b8f5b-s7fmg\" (UID: \"cf758b71-afa0-4ca6-a481-4a01aa013427\") " pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.910774 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf758b71-afa0-4ca6-a481-4a01aa013427-cert\") pod \"openstack-operator-controller-manager-8bc6b8f5b-s7fmg\" (UID: \"cf758b71-afa0-4ca6-a481-4a01aa013427\") " pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.911144 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce67216a-bf27-40b0-8beb-bec511f71d94-cert\") pod \"infra-operator-controller-manager-656bcbd775-gwsl7\" (UID: \"ce67216a-bf27-40b0-8beb-bec511f71d94\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.911273 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5hwg\" (UniqueName: \"kubernetes.io/projected/b9279847-3be8-4917-b1e3-b2d4459f45de-kube-api-access-s5hwg\") pod \"telemetry-operator-controller-manager-76796d4c6b-rwmpv\" (UID: \"b9279847-3be8-4917-b1e3-b2d4459f45de\") " pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.911477 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z5cv\" (UniqueName: \"kubernetes.io/projected/4af395f3-a5b6-4f08-9cf9-99ca8ef679f1-kube-api-access-5z5cv\") pod \"test-operator-controller-manager-56c698c775-nct4q\" (UID: \"4af395f3-a5b6-4f08-9cf9-99ca8ef679f1\") " pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.911547 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rqn8\" (UniqueName: \"kubernetes.io/projected/24e4cd94-cd0b-440f-8574-d93134d9b63d-kube-api-access-8rqn8\") pod \"watcher-operator-controller-manager-7794bc6bd-kjqtn\" (UID: \"24e4cd94-cd0b-440f-8574-d93134d9b63d\") " pod="openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.919292 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.929501 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.931158 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.935299 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.936715 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7ckhr" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.939033 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce67216a-bf27-40b0-8beb-bec511f71d94-cert\") pod \"infra-operator-controller-manager-656bcbd775-gwsl7\" (UID: \"ce67216a-bf27-40b0-8beb-bec511f71d94\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.939136 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z5cv\" (UniqueName: \"kubernetes.io/projected/4af395f3-a5b6-4f08-9cf9-99ca8ef679f1-kube-api-access-5z5cv\") pod \"test-operator-controller-manager-56c698c775-nct4q\" (UID: \"4af395f3-a5b6-4f08-9cf9-99ca8ef679f1\") " pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.956519 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92"] Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.965798 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.967521 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5hwg\" (UniqueName: \"kubernetes.io/projected/b9279847-3be8-4917-b1e3-b2d4459f45de-kube-api-access-s5hwg\") pod \"telemetry-operator-controller-manager-76796d4c6b-rwmpv\" (UID: \"b9279847-3be8-4917-b1e3-b2d4459f45de\") " pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv" Oct 08 13:34:14 crc kubenswrapper[5065]: I1008 13:34:14.990488 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx"] Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.013918 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rqn8\" (UniqueName: \"kubernetes.io/projected/24e4cd94-cd0b-440f-8574-d93134d9b63d-kube-api-access-8rqn8\") pod \"watcher-operator-controller-manager-7794bc6bd-kjqtn\" (UID: \"24e4cd94-cd0b-440f-8574-d93134d9b63d\") " pod="openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn" Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.013979 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzlkc\" (UniqueName: \"kubernetes.io/projected/cf758b71-afa0-4ca6-a481-4a01aa013427-kube-api-access-fzlkc\") pod \"openstack-operator-controller-manager-8bc6b8f5b-s7fmg\" (UID: \"cf758b71-afa0-4ca6-a481-4a01aa013427\") " pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.014853 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6mpk9\" (UniqueName: \"kubernetes.io/projected/3d693cfe-0346-4970-ba03-dde30d33fb28-kube-api-access-6mpk9\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-j8b92\" (UID: \"3d693cfe-0346-4970-ba03-dde30d33fb28\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92" Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.014895 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf758b71-afa0-4ca6-a481-4a01aa013427-cert\") pod \"openstack-operator-controller-manager-8bc6b8f5b-s7fmg\" (UID: \"cf758b71-afa0-4ca6-a481-4a01aa013427\") " pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" Oct 08 13:34:15 crc kubenswrapper[5065]: E1008 13:34:15.015094 5065 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 08 13:34:15 crc kubenswrapper[5065]: E1008 13:34:15.015144 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf758b71-afa0-4ca6-a481-4a01aa013427-cert podName:cf758b71-afa0-4ca6-a481-4a01aa013427 nodeName:}" failed. No retries permitted until 2025-10-08 13:34:15.515125688 +0000 UTC m=+957.292507445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cf758b71-afa0-4ca6-a481-4a01aa013427-cert") pod "openstack-operator-controller-manager-8bc6b8f5b-s7fmg" (UID: "cf758b71-afa0-4ca6-a481-4a01aa013427") : secret "webhook-server-cert" not found Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.037367 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt"] Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.041776 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv" Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.043362 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzlkc\" (UniqueName: \"kubernetes.io/projected/cf758b71-afa0-4ca6-a481-4a01aa013427-kube-api-access-fzlkc\") pod \"openstack-operator-controller-manager-8bc6b8f5b-s7fmg\" (UID: \"cf758b71-afa0-4ca6-a481-4a01aa013427\") " pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.046250 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rqn8\" (UniqueName: \"kubernetes.io/projected/24e4cd94-cd0b-440f-8574-d93134d9b63d-kube-api-access-8rqn8\") pod \"watcher-operator-controller-manager-7794bc6bd-kjqtn\" (UID: \"24e4cd94-cd0b-440f-8574-d93134d9b63d\") " pod="openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn" Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.108067 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.116234 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mpk9\" (UniqueName: \"kubernetes.io/projected/3d693cfe-0346-4970-ba03-dde30d33fb28-kube-api-access-6mpk9\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-j8b92\" (UID: \"3d693cfe-0346-4970-ba03-dde30d33fb28\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92" Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.168408 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mpk9\" (UniqueName: \"kubernetes.io/projected/3d693cfe-0346-4970-ba03-dde30d33fb28-kube-api-access-6mpk9\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-j8b92\" (UID: \"3d693cfe-0346-4970-ba03-dde30d33fb28\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92" Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.185286 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q" Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.209731 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn" Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.259709 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b72ec83c-136e-4cde-8aa9-e978bbe7cd2a-cert\") pod \"openstack-baremetal-operator-controller-manager-747747dfccng5kz\" (UID: \"b72ec83c-136e-4cde-8aa9-e978bbe7cd2a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" Oct 08 13:34:15 crc kubenswrapper[5065]: E1008 13:34:15.259848 5065 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 08 13:34:15 crc kubenswrapper[5065]: E1008 13:34:15.259891 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72ec83c-136e-4cde-8aa9-e978bbe7cd2a-cert podName:b72ec83c-136e-4cde-8aa9-e978bbe7cd2a nodeName:}" failed. No retries permitted until 2025-10-08 13:34:16.259878267 +0000 UTC m=+958.037260014 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b72ec83c-136e-4cde-8aa9-e978bbe7cd2a-cert") pod "openstack-baremetal-operator-controller-manager-747747dfccng5kz" (UID: "b72ec83c-136e-4cde-8aa9-e978bbe7cd2a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.281737 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92" Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.291794 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh"] Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.299121 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57"] Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.424230 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx" event={"ID":"9848db5e-38fd-4867-a9a3-8945c5c4fc27","Type":"ContainerStarted","Data":"ffc77d6e2c1a8b675e259a56b69ba14fbad4e5506d40b5ed51fd03d99a2fd977"} Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.425181 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt" event={"ID":"d306130a-6424-4380-8de6-74adc212298d","Type":"ContainerStarted","Data":"6d31ace6233d9d788b423ca329cfa2f8fda5d3bcda3101064557566d1b841bc3"} Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.426398 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57" event={"ID":"72eb96ef-8ded-45ed-a440-be05e49c7667","Type":"ContainerStarted","Data":"7fd5d92ba86c0e286bfea22c7baedd26223266404d88dc2bcaf37b1535792b21"} Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.428661 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh" event={"ID":"1b78b39c-e53e-4efa-96b8-185f730711fb","Type":"ContainerStarted","Data":"ef6e63b62d7144be3ff0c4b93b9e4f2a94134b852de1f305e1f3babb97f2e92f"} Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.563842 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf758b71-afa0-4ca6-a481-4a01aa013427-cert\") pod \"openstack-operator-controller-manager-8bc6b8f5b-s7fmg\" (UID: \"cf758b71-afa0-4ca6-a481-4a01aa013427\") " pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" Oct 08 13:34:15 crc kubenswrapper[5065]: E1008 13:34:15.564023 5065 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 08 13:34:15 crc kubenswrapper[5065]: E1008 13:34:15.564069 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf758b71-afa0-4ca6-a481-4a01aa013427-cert podName:cf758b71-afa0-4ca6-a481-4a01aa013427 nodeName:}" failed. No retries permitted until 2025-10-08 13:34:16.564054479 +0000 UTC m=+958.341436236 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cf758b71-afa0-4ca6-a481-4a01aa013427-cert") pod "openstack-operator-controller-manager-8bc6b8f5b-s7fmg" (UID: "cf758b71-afa0-4ca6-a481-4a01aa013427") : secret "webhook-server-cert" not found Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.751520 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq"] Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.762596 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g"] Oct 08 13:34:15 crc kubenswrapper[5065]: W1008 13:34:15.771829 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8415abb2_d31f_443c_b458_775e281540a6.slice/crio-610e6903da4c711ca404b967c17e055f4d2abd4664c80dbd20477b8ba810ec12 WatchSource:0}: Error finding container 610e6903da4c711ca404b967c17e055f4d2abd4664c80dbd20477b8ba810ec12: Status 404 returned error can't find the container with id 610e6903da4c711ca404b967c17e055f4d2abd4664c80dbd20477b8ba810ec12 Oct 08 13:34:15 crc kubenswrapper[5065]: W1008 13:34:15.773938 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f6d22ca_e21b_4cc9_8640_ac38a35bbd7a.slice/crio-ad66511e79b26a3f312ac33de27a0517a90437bade20966ba24e9d92e516bbcb WatchSource:0}: Error finding container ad66511e79b26a3f312ac33de27a0517a90437bade20966ba24e9d92e516bbcb: Status 404 returned error can't find the container with id ad66511e79b26a3f312ac33de27a0517a90437bade20966ba24e9d92e516bbcb Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.785221 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg"] Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.797260 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j"] Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.826482 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn"] Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.838607 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr"] Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.952830 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv"] Oct 08 13:34:15 crc kubenswrapper[5065]: W1008 13:34:15.961464 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1ef138e_172a_4b51_aaca_1bfb30b7cc3a.slice/crio-07af0d68cc8f8e7a557f4a258967253b6a79105f4119a46ef80238bdc5da1433 WatchSource:0}: Error finding container 07af0d68cc8f8e7a557f4a258967253b6a79105f4119a46ef80238bdc5da1433: Status 404 returned error can't find the container with id 07af0d68cc8f8e7a557f4a258967253b6a79105f4119a46ef80238bdc5da1433 Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.961492 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z"] Oct 08 13:34:15 crc kubenswrapper[5065]: I1008 13:34:15.970433 5065 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8"] Oct 08 13:34:15 crc kubenswrapper[5065]: W1008 13:34:15.977047 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb914e93_c33d_44ae_a713_7bd24af3faff.slice/crio-76969c3a61a88efcf9206ac3601e0376bf098d2fe5addd46e6d9df198d8f8f1d WatchSource:0}: Error finding container 76969c3a61a88efcf9206ac3601e0376bf098d2fe5addd46e6d9df198d8f8f1d: Status 404 returned error can't find the container with id 76969c3a61a88efcf9206ac3601e0376bf098d2fe5addd46e6d9df198d8f8f1d Oct 08 13:34:15 crc kubenswrapper[5065]: W1008 13:34:15.977367 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e76f34c_8ac9_408e_96ca_0eaf5aa470cf.slice/crio-4aab9fa10332c2860c897cf201609017ab5bf8d07649cb08df89d5c282fd1c60 WatchSource:0}: Error finding container 4aab9fa10332c2860c897cf201609017ab5bf8d07649cb08df89d5c282fd1c60: Status 404 returned error can't find the container with id 4aab9fa10332c2860c897cf201609017ab5bf8d07649cb08df89d5c282fd1c60 Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.090310 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn"] Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.094250 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb"] Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.109327 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7"] Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.131690 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv"] Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.149672 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56c698c775-nct4q"] Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.153113 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx"] Oct 08 13:34:16 crc kubenswrapper[5065]: W1008 13:34:16.160698 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce67216a_bf27_40b0_8beb_bec511f71d94.slice/crio-46f4c4a1f2dd3af3487f472fae82d8c09470be5f8ccb5e831864255956617b52 WatchSource:0}: Error finding container 46f4c4a1f2dd3af3487f472fae82d8c09470be5f8ccb5e831864255956617b52: Status 404 returned error can't find the container with id 46f4c4a1f2dd3af3487f472fae82d8c09470be5f8ccb5e831864255956617b52 Oct 08 13:34:16 crc kubenswrapper[5065]: E1008 13:34:16.176299 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-656bcbd775-gwsl7_openstack-operators(ce67216a-bf27-40b0-8beb-bec511f71d94): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 13:34:16 crc kubenswrapper[5065]: E1008 13:34:16.176790 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:9d26476523320d70d6d457b91663e8c233ed320d77032a7c57a89ce1aedd3931,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s5hwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76796d4c6b-rwmpv_openstack-operators(b9279847-3be8-4917-b1e3-b2d4459f45de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.179088 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92"] Oct 08 13:34:16 crc kubenswrapper[5065]: E1008 13:34:16.183456 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:efa8fb78cffb573d299ffcc7bab1099affd2dbbab222152092b313074306e0a9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5z5cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56c698c775-nct4q_openstack-operators(4af395f3-a5b6-4f08-9cf9-99ca8ef679f1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 13:34:16 crc kubenswrapper[5065]: E1008 13:34:16.183690 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjvp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-68b6c87b68-9wdcx_openstack-operators(2b64adbd-0608-45f4-aaf8-3d7af011873e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.199088 5065 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm"] Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.276320 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b72ec83c-136e-4cde-8aa9-e978bbe7cd2a-cert\") pod \"openstack-baremetal-operator-controller-manager-747747dfccng5kz\" (UID: \"b72ec83c-136e-4cde-8aa9-e978bbe7cd2a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" Oct 08 13:34:16 crc kubenswrapper[5065]: E1008 13:34:16.284651 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6rqln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-db6d7f97b-947sm_openstack-operators(bfa86e67-08ac-4df7-84fe-c084e6c05bc1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 13:34:16 crc kubenswrapper[5065]: E1008 13:34:16.284798 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6mpk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-j8b92_openstack-operators(3d693cfe-0346-4970-ba03-dde30d33fb28): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.285157 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b72ec83c-136e-4cde-8aa9-e978bbe7cd2a-cert\") pod \"openstack-baremetal-operator-controller-manager-747747dfccng5kz\" (UID: \"b72ec83c-136e-4cde-8aa9-e978bbe7cd2a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" Oct 08 13:34:16 crc kubenswrapper[5065]: E1008 13:34:16.286307 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92" podUID="3d693cfe-0346-4970-ba03-dde30d33fb28" Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.382001 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.435719 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8" event={"ID":"2e76f34c-8ac9-408e-96ca-0eaf5aa470cf","Type":"ContainerStarted","Data":"4aab9fa10332c2860c897cf201609017ab5bf8d07649cb08df89d5c282fd1c60"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.439873 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq" event={"ID":"6f6d22ca-e21b-4cc9-8640-ac38a35bbd7a","Type":"ContainerStarted","Data":"ad66511e79b26a3f312ac33de27a0517a90437bade20966ba24e9d92e516bbcb"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.442139 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn" event={"ID":"8538f19d-d12a-4d6f-ae3c-f71fa9dfe0ba","Type":"ContainerStarted","Data":"fad27070e024f50305becf05c899878418c1e1399809d48908d5ca771254830d"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.443879 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92" event={"ID":"3d693cfe-0346-4970-ba03-dde30d33fb28","Type":"ContainerStarted","Data":"fbb4a759fb098724b61e234da3918161409309df1ea2f73fccb08b2624833c8f"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.445793 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" event={"ID":"ce67216a-bf27-40b0-8beb-bec511f71d94","Type":"ContainerStarted","Data":"46f4c4a1f2dd3af3487f472fae82d8c09470be5f8ccb5e831864255956617b52"} Oct 08 13:34:16 crc kubenswrapper[5065]: E1008 13:34:16.445799 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92" podUID="3d693cfe-0346-4970-ba03-dde30d33fb28" Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.452586 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb" event={"ID":"9d528074-25d9-43df-80ac-e7f4aa8573bc","Type":"ContainerStarted","Data":"70891e7e3eb13eecb0f698c8ca5f9878b2fc4c6ad9d81494d307f6a1287f7f21"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.454398 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j" event={"ID":"c4e220c5-f7d7-41aa-b250-94c0fc693dd9","Type":"ContainerStarted","Data":"95f54b8d331444ca8b3fb061199fc0720ea33c1a9068690051c09abd7a944b88"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.459073 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr" event={"ID":"55bc4a42-9132-41b0-bd10-d05a51fff80e","Type":"ContainerStarted","Data":"b52e5daa3172a9c152bae83d11feca3cdc9b5f91df16013255bb37ff458cee66"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.462994 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx" event={"ID":"2b64adbd-0608-45f4-aaf8-3d7af011873e","Type":"ContainerStarted","Data":"95bc642254cca9fc0c92e3e6c56b7ccd3124a085a8867b43d34add78635cf5c6"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.465162 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv" event={"ID":"b9279847-3be8-4917-b1e3-b2d4459f45de","Type":"ContainerStarted","Data":"f12618aedd93e609a71e557fbe3b1ec1946e601fa87ac970aeee0a03cd4c5d53"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.474820 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z" event={"ID":"fb914e93-c33d-44ae-a713-7bd24af3faff","Type":"ContainerStarted","Data":"76969c3a61a88efcf9206ac3601e0376bf098d2fe5addd46e6d9df198d8f8f1d"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.481437 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g" event={"ID":"8415abb2-d31f-443c-b458-775e281540a6","Type":"ContainerStarted","Data":"610e6903da4c711ca404b967c17e055f4d2abd4664c80dbd20477b8ba810ec12"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.482331 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q" event={"ID":"4af395f3-a5b6-4f08-9cf9-99ca8ef679f1","Type":"ContainerStarted","Data":"5757725e3d1dd8bb7b22d40f317599b732d4accde6bf3363ec10cc60a77e33ea"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.488572 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm" event={"ID":"bfa86e67-08ac-4df7-84fe-c084e6c05bc1","Type":"ContainerStarted","Data":"c857708151ae3a3508a46595379dc5e13e6f21207da2a8b6fb225969115be1ac"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.503746 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv" event={"ID":"a1ef138e-172a-4b51-aaca-1bfb30b7cc3a","Type":"ContainerStarted","Data":"07af0d68cc8f8e7a557f4a258967253b6a79105f4119a46ef80238bdc5da1433"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.506541 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn" event={"ID":"24e4cd94-cd0b-440f-8574-d93134d9b63d","Type":"ContainerStarted","Data":"3ddd84ff1e6ea94973b32654a9fbe724a1424f69676bd04343a5935ab9814092"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.508491 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg" event={"ID":"fc91f24e-897a-45d0-8119-d3a5e75e989d","Type":"ContainerStarted","Data":"d00fb72ac93f7f8df0e208a89978f54510bb3b61405fbd3531e416a3e6059d9c"} Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.580914 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf758b71-afa0-4ca6-a481-4a01aa013427-cert\") pod \"openstack-operator-controller-manager-8bc6b8f5b-s7fmg\" (UID: \"cf758b71-afa0-4ca6-a481-4a01aa013427\") " pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.586969 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf758b71-afa0-4ca6-a481-4a01aa013427-cert\") pod \"openstack-operator-controller-manager-8bc6b8f5b-s7fmg\" (UID: \"cf758b71-afa0-4ca6-a481-4a01aa013427\") " pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" Oct 08 13:34:16 crc kubenswrapper[5065]: E1008 13:34:16.593770 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv" podUID="b9279847-3be8-4917-b1e3-b2d4459f45de" Oct 08 13:34:16 crc kubenswrapper[5065]: E1008 13:34:16.602354 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" podUID="ce67216a-bf27-40b0-8beb-bec511f71d94" Oct 08 13:34:16 crc kubenswrapper[5065]: E1008 13:34:16.603631 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q" podUID="4af395f3-a5b6-4f08-9cf9-99ca8ef679f1" Oct 08 13:34:16 crc kubenswrapper[5065]: E1008 13:34:16.637674 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm" podUID="bfa86e67-08ac-4df7-84fe-c084e6c05bc1" Oct 08 13:34:16 crc kubenswrapper[5065]: E1008 13:34:16.639105 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx" podUID="2b64adbd-0608-45f4-aaf8-3d7af011873e" Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.768733 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" Oct 08 13:34:16 crc kubenswrapper[5065]: I1008 13:34:16.922604 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz"] Oct 08 13:34:16 crc kubenswrapper[5065]: W1008 13:34:16.931846 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb72ec83c_136e_4cde_8aa9_e978bbe7cd2a.slice/crio-bfbce2046270bd34d9ba6a90b06a06d6ded135edcdd16eca56811c7202344069 WatchSource:0}: Error finding container bfbce2046270bd34d9ba6a90b06a06d6ded135edcdd16eca56811c7202344069: Status 404 returned error can't find the container with id bfbce2046270bd34d9ba6a90b06a06d6ded135edcdd16eca56811c7202344069 Oct 08 13:34:17 crc kubenswrapper[5065]: I1008 13:34:17.329730 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg"] Oct 08 13:34:17 crc kubenswrapper[5065]: I1008 13:34:17.524863 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" event={"ID":"ce67216a-bf27-40b0-8beb-bec511f71d94","Type":"ContainerStarted","Data":"320141bbeb35118edc3b0fdea89e2b11b5818371896c87848feca500337eb7fa"} Oct 08 13:34:17 crc kubenswrapper[5065]: I1008 13:34:17.526790 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" event={"ID":"b72ec83c-136e-4cde-8aa9-e978bbe7cd2a","Type":"ContainerStarted","Data":"bfbce2046270bd34d9ba6a90b06a06d6ded135edcdd16eca56811c7202344069"} Oct 08 13:34:17 crc kubenswrapper[5065]: E1008 13:34:17.527681 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492\\\"\"" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" podUID="ce67216a-bf27-40b0-8beb-bec511f71d94" Oct 08 13:34:17 crc kubenswrapper[5065]: I1008 13:34:17.529203 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx" event={"ID":"2b64adbd-0608-45f4-aaf8-3d7af011873e","Type":"ContainerStarted","Data":"60ec03d9d00f2761ae4b6b0395ff3bfeb19dcfce3f55434029a85fea8c7a222a"} Oct 08 13:34:17 crc kubenswrapper[5065]: E1008 13:34:17.530759 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff\\\"\"" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx" podUID="2b64adbd-0608-45f4-aaf8-3d7af011873e" Oct 08 13:34:17 crc kubenswrapper[5065]: I1008 13:34:17.531956 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv" event={"ID":"b9279847-3be8-4917-b1e3-b2d4459f45de","Type":"ContainerStarted","Data":"ac8d31b51a983c7301873513a2c2e2f49e0e92b8617ea8733cff761562b7c5ed"} Oct 08 13:34:17 crc kubenswrapper[5065]: E1008 13:34:17.532776 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:9d26476523320d70d6d457b91663e8c233ed320d77032a7c57a89ce1aedd3931\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv" podUID="b9279847-3be8-4917-b1e3-b2d4459f45de" Oct 08 13:34:17 crc kubenswrapper[5065]: I1008 13:34:17.534772 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q" event={"ID":"4af395f3-a5b6-4f08-9cf9-99ca8ef679f1","Type":"ContainerStarted","Data":"a21140294663afdd60a01a3e7211a718978dbf5f7d2c4c8fbb4d8ca123374b4c"} Oct 08 13:34:17 crc kubenswrapper[5065]: E1008 13:34:17.535840 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:efa8fb78cffb573d299ffcc7bab1099affd2dbbab222152092b313074306e0a9\\\"\"" pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q" podUID="4af395f3-a5b6-4f08-9cf9-99ca8ef679f1" Oct 08 13:34:17 crc kubenswrapper[5065]: I1008 13:34:17.538911 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm" event={"ID":"bfa86e67-08ac-4df7-84fe-c084e6c05bc1","Type":"ContainerStarted","Data":"b3ab3a0b79c0eb40636dbdcb1b145a2487154948b5c9a64e3a12096844058a1a"} Oct 08 13:34:17 crc kubenswrapper[5065]: E1008 13:34:17.549946 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm" podUID="bfa86e67-08ac-4df7-84fe-c084e6c05bc1" Oct 08 13:34:17 crc kubenswrapper[5065]: E1008 13:34:17.550011 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92" podUID="3d693cfe-0346-4970-ba03-dde30d33fb28" Oct 08 13:34:18 crc kubenswrapper[5065]: W1008 13:34:18.375828 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf758b71_afa0_4ca6_a481_4a01aa013427.slice/crio-494c5678d8962bc87068d7726d7b7af877eddbfc187274afe0ef680cdad0ec57 WatchSource:0}: Error finding container 494c5678d8962bc87068d7726d7b7af877eddbfc187274afe0ef680cdad0ec57: Status 404 returned error can't find the container with id 494c5678d8962bc87068d7726d7b7af877eddbfc187274afe0ef680cdad0ec57 Oct 08 13:34:18 crc kubenswrapper[5065]: I1008 13:34:18.546216 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" event={"ID":"cf758b71-afa0-4ca6-a481-4a01aa013427","Type":"ContainerStarted","Data":"494c5678d8962bc87068d7726d7b7af877eddbfc187274afe0ef680cdad0ec57"} Oct 08 13:34:18 crc kubenswrapper[5065]: E1008 13:34:18.547573 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:9d26476523320d70d6d457b91663e8c233ed320d77032a7c57a89ce1aedd3931\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv" podUID="b9279847-3be8-4917-b1e3-b2d4459f45de" Oct 08 13:34:18 crc kubenswrapper[5065]: E1008 13:34:18.547787 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492\\\"\"" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" podUID="ce67216a-bf27-40b0-8beb-bec511f71d94" Oct 08 13:34:18 crc kubenswrapper[5065]: E1008 13:34:18.547873 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm" podUID="bfa86e67-08ac-4df7-84fe-c084e6c05bc1" Oct 08 13:34:18 crc kubenswrapper[5065]: E1008 13:34:18.549717 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:efa8fb78cffb573d299ffcc7bab1099affd2dbbab222152092b313074306e0a9\\\"\"" pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q" podUID="4af395f3-a5b6-4f08-9cf9-99ca8ef679f1" Oct 08 13:34:18 crc kubenswrapper[5065]: E1008 13:34:18.549745 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff\\\"\"" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx" podUID="2b64adbd-0608-45f4-aaf8-3d7af011873e" Oct 08 13:34:25 crc kubenswrapper[5065]: I1008 13:34:25.618236 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57" event={"ID":"72eb96ef-8ded-45ed-a440-be05e49c7667","Type":"ContainerStarted","Data":"a86bcd24d77d1dda06de8724385b2bcc4520c588ec6bc16c54edfde00140ec37"} Oct 08 13:34:25 crc kubenswrapper[5065]: I1008 13:34:25.620150 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt" event={"ID":"d306130a-6424-4380-8de6-74adc212298d","Type":"ContainerStarted","Data":"0935f6868698a080ff500aa072805dcec4e1511292f003c561fe3f477710eadf"} Oct 08 13:34:25 crc kubenswrapper[5065]: I1008 13:34:25.621464 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq" event={"ID":"6f6d22ca-e21b-4cc9-8640-ac38a35bbd7a","Type":"ContainerStarted","Data":"a9db5d28c77746ade69088a78ad21315fb2e09ed3608318f557b3a0358c9e4fc"} Oct 08 13:34:25 crc kubenswrapper[5065]: I1008 13:34:25.626029 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb" event={"ID":"9d528074-25d9-43df-80ac-e7f4aa8573bc","Type":"ContainerStarted","Data":"69e79a8a6fbd03519e7987519d39e4c150c254b3ead506494a61b8c6640e40c1"} Oct 08 
Oct 08 13:34:25 crc kubenswrapper[5065]: I1008 13:34:25.627901 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j" event={"ID":"c4e220c5-f7d7-41aa-b250-94c0fc693dd9","Type":"ContainerStarted","Data":"6e3fed46ba94bf80daa056b99f19e3648474b2d7c7a492549f7c2c8136d5965d"}
Oct 08 13:34:25 crc kubenswrapper[5065]: I1008 13:34:25.631933 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg" event={"ID":"fc91f24e-897a-45d0-8119-d3a5e75e989d","Type":"ContainerStarted","Data":"95d08c2b4abce4664f81c8d53015c0f2713ca1bb73d4630d404b8b58002671e1"}
Oct 08 13:34:25 crc kubenswrapper[5065]: I1008 13:34:25.633136 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8" event={"ID":"2e76f34c-8ac9-408e-96ca-0eaf5aa470cf","Type":"ContainerStarted","Data":"ddc05cc225750550fd2a7b568cdf0a3ac5ed0a70899fd50041b938f16c163fa9"}
Oct 08 13:34:25 crc kubenswrapper[5065]: I1008 13:34:25.645580 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv" event={"ID":"a1ef138e-172a-4b51-aaca-1bfb30b7cc3a","Type":"ContainerStarted","Data":"658843a81809c3928fcbb7786046836095cd31c3f3b49985b7b1ff7cf166e94e"}
Oct 08 13:34:25 crc kubenswrapper[5065]: I1008 13:34:25.649347 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx" event={"ID":"9848db5e-38fd-4867-a9a3-8945c5c4fc27","Type":"ContainerStarted","Data":"9ef458974b7bcd405790d172edd5da8aaa50c84d5ddf4060e712ac80dfb207f1"}
Oct 08 13:34:25 crc kubenswrapper[5065]: I1008 13:34:25.652237 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" event={"ID":"cf758b71-afa0-4ca6-a481-4a01aa013427","Type":"ContainerStarted","Data":"95740bf374700c5865e374ebadc4ee6179897f4d0d10d28edd82e1f1bd54d11b"}
Oct 08 13:34:25 crc kubenswrapper[5065]: I1008 13:34:25.655651 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh" event={"ID":"1b78b39c-e53e-4efa-96b8-185f730711fb","Type":"ContainerStarted","Data":"ef683a4e375c0475431faae73ebb5024ba4a325d777663b552305e6d6718f27c"}
Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.663643 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn" event={"ID":"24e4cd94-cd0b-440f-8574-d93134d9b63d","Type":"ContainerStarted","Data":"dc8113326af3aec4aa2201e5d9e787cb3d421c6fde50450228cd72b14e50d5ff"}
Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.664015 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn" event={"ID":"24e4cd94-cd0b-440f-8574-d93134d9b63d","Type":"ContainerStarted","Data":"377fa8421e300183e7d1af61ea9b93a8ba784a8b1533b463059eba2bfedd9c89"}
Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.664041 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn"
Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.665673 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt"
event={"ID":"d306130a-6424-4380-8de6-74adc212298d","Type":"ContainerStarted","Data":"cf421920f7be9f5df444c60710c54ac98e591129801d5f89c212aeadd487fa85"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.665804 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.667549 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn" event={"ID":"8538f19d-d12a-4d6f-ae3c-f71fa9dfe0ba","Type":"ContainerStarted","Data":"28dbc8b22b74973e606ef915e6de34de0b41f839528df21d4786e0ffe8aa6d13"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.667585 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn" event={"ID":"8538f19d-d12a-4d6f-ae3c-f71fa9dfe0ba","Type":"ContainerStarted","Data":"d5b2461234b04190b24806c929f655f159a9fb0de76e02848933510874c09188"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.667835 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.669452 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv" event={"ID":"a1ef138e-172a-4b51-aaca-1bfb30b7cc3a","Type":"ContainerStarted","Data":"406bb860e0859f881f2faf37e2ef27c2ea04d653beeadef749dbc42d2dbdc196"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.669655 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.671427 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr" event={"ID":"55bc4a42-9132-41b0-bd10-d05a51fff80e","Type":"ContainerStarted","Data":"cfaad0e34a2bd55cadbe34bcc8ec108ff28aaeb05e10b6077806a5d6ccb9a774"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.671467 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr" event={"ID":"55bc4a42-9132-41b0-bd10-d05a51fff80e","Type":"ContainerStarted","Data":"4d5e873abee975348f25042b9aaf6e066d9cf6ecd105b7d5415f91991630aac7"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.671571 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.673988 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z" event={"ID":"fb914e93-c33d-44ae-a713-7bd24af3faff","Type":"ContainerStarted","Data":"262542670d6efba31808d89cef549dfff176f952306ddf9a117d82f4aff3adcb"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.674110 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.674175 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z" 
event={"ID":"fb914e93-c33d-44ae-a713-7bd24af3faff","Type":"ContainerStarted","Data":"0dd248b3084d18517884eae349b68971b2d61bffb2f87ab4987c9f63b986145d"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.676495 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57" event={"ID":"72eb96ef-8ded-45ed-a440-be05e49c7667","Type":"ContainerStarted","Data":"e0d24a2ea620f5f0ed7777a6401723e63d0e5ffc215b1f1e9066f06a79c76430"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.676747 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.678609 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg" event={"ID":"fc91f24e-897a-45d0-8119-d3a5e75e989d","Type":"ContainerStarted","Data":"c9684cf66c34b58fa38851ba4b12f272a9d1ccdcb7666a79e0fe3ebca822f553"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.678712 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.680441 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j" event={"ID":"c4e220c5-f7d7-41aa-b250-94c0fc693dd9","Type":"ContainerStarted","Data":"34db414a4934afe3fa050c6366724367e51329a2449fae5ef404b9c7895cfaf3"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.680536 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.682030 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" event={"ID":"cf758b71-afa0-4ca6-a481-4a01aa013427","Type":"ContainerStarted","Data":"644ac3a29486baf61cec63bc253e6272fb0e1d5b9593f6bcdcd082b3cd16741b"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.682090 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.683676 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8" event={"ID":"2e76f34c-8ac9-408e-96ca-0eaf5aa470cf","Type":"ContainerStarted","Data":"50ec32fe8d035a21b3b089e5e01322b908a619c1a9d121cdd3cda6bbbdcc4e6f"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.683940 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.685833 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" event={"ID":"b72ec83c-136e-4cde-8aa9-e978bbe7cd2a","Type":"ContainerStarted","Data":"0143ed7d3177b9d0445e62c32c3540aee37078cc59b43fa3f1b593477b9d2ba2"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.685874 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" 
event={"ID":"b72ec83c-136e-4cde-8aa9-e978bbe7cd2a","Type":"ContainerStarted","Data":"4255cff360e52fe4f250628be5d8a84dbccf52b4dada51803a81e0129a02e781"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.685955 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.689630 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq" event={"ID":"6f6d22ca-e21b-4cc9-8640-ac38a35bbd7a","Type":"ContainerStarted","Data":"486fae363c50b267035fc99897b2bdd170c31e48177808bdfbcb608845ad14fc"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.689876 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.691833 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb" event={"ID":"9d528074-25d9-43df-80ac-e7f4aa8573bc","Type":"ContainerStarted","Data":"4d900b90cf3c2c2e1c8e4811863692a538bf14ec73971e8f6413a44caa8c5fda"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.692221 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.692498 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn" podStartSLOduration=3.88594297 podStartE2EDuration="12.692480058s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:16.153775676 +0000 UTC m=+957.931157433" lastFinishedPulling="2025-10-08 13:34:24.960312764 +0000 UTC m=+966.737694521" observedRunningTime="2025-10-08 13:34:26.687768825 +0000 UTC m=+968.465150602" watchObservedRunningTime="2025-10-08 13:34:26.692480058 +0000 UTC m=+968.469861815" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.696765 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx" event={"ID":"9848db5e-38fd-4867-a9a3-8945c5c4fc27","Type":"ContainerStarted","Data":"0c085a5b9d42217dbb1e6ed6ce5f79d23bcf85472b2450196ad7f16e89d55192"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.696949 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.702257 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g" event={"ID":"8415abb2-d31f-443c-b458-775e281540a6","Type":"ContainerStarted","Data":"a8ae5377583effa4389a506759255d8c988307b6bf0ba2f54b8455f975e66119"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.702322 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g" event={"ID":"8415abb2-d31f-443c-b458-775e281540a6","Type":"ContainerStarted","Data":"38425fd8e207bdb5d56a9be0276fbda50a5bf6be8fb30855bf07a7b3c46afaba"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.702393 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.704204 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh" event={"ID":"1b78b39c-e53e-4efa-96b8-185f730711fb","Type":"ContainerStarted","Data":"e165a6e1a9a440a0a897aa3165c6d4ba0ded54826b03fbef85024a9c6994eefd"} Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.704383 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.719888 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z" podStartSLOduration=3.6980121969999997 podStartE2EDuration="12.719866254s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:15.981314807 +0000 UTC m=+957.758696564" lastFinishedPulling="2025-10-08 13:34:25.003168864 +0000 UTC m=+966.780550621" observedRunningTime="2025-10-08 13:34:26.714655558 +0000 UTC m=+968.492037325" watchObservedRunningTime="2025-10-08 13:34:26.719866254 +0000 UTC m=+968.497248011" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.735852 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv" podStartSLOduration=3.754018441 podStartE2EDuration="12.735834511s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:15.966189162 +0000 UTC m=+957.743570919" lastFinishedPulling="2025-10-08 13:34:24.948005232 +0000 UTC m=+966.725386989" observedRunningTime="2025-10-08 13:34:26.732914605 +0000 UTC m=+968.510296362" watchObservedRunningTime="2025-10-08 13:34:26.735834511 +0000 UTC m=+968.513216268" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.762800 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" podStartSLOduration=12.762778626 podStartE2EDuration="12.762778626s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:34:26.762330424 +0000 UTC m=+968.539712181" watchObservedRunningTime="2025-10-08 13:34:26.762778626 +0000 UTC m=+968.540160393" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.785583 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt" podStartSLOduration=3.094406087 podStartE2EDuration="12.785566522s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:15.170987893 +0000 UTC m=+956.948369650" lastFinishedPulling="2025-10-08 13:34:24.862148328 +0000 UTC m=+966.639530085" observedRunningTime="2025-10-08 13:34:26.784931005 +0000 UTC m=+968.562312762" watchObservedRunningTime="2025-10-08 13:34:26.785566522 +0000 UTC m=+968.562948279" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.817809 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" podStartSLOduration=4.751007016 podStartE2EDuration="12.817790234s" podCreationTimestamp="2025-10-08 
13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:16.936077078 +0000 UTC m=+958.713458835" lastFinishedPulling="2025-10-08 13:34:25.002860276 +0000 UTC m=+966.780242053" observedRunningTime="2025-10-08 13:34:26.813750218 +0000 UTC m=+968.591131975" watchObservedRunningTime="2025-10-08 13:34:26.817790234 +0000 UTC m=+968.595171991" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.859257 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn" podStartSLOduration=3.735468967 podStartE2EDuration="12.859235848s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:15.828997575 +0000 UTC m=+957.606379332" lastFinishedPulling="2025-10-08 13:34:24.952764446 +0000 UTC m=+966.730146213" observedRunningTime="2025-10-08 13:34:26.849220466 +0000 UTC m=+968.626602223" watchObservedRunningTime="2025-10-08 13:34:26.859235848 +0000 UTC m=+968.636617605" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.901921 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8" podStartSLOduration=3.953828654 podStartE2EDuration="12.901903123s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:15.980583758 +0000 UTC m=+957.757965535" lastFinishedPulling="2025-10-08 13:34:24.928658247 +0000 UTC m=+966.706040004" observedRunningTime="2025-10-08 13:34:26.869781203 +0000 UTC m=+968.647162990" watchObservedRunningTime="2025-10-08 13:34:26.901903123 +0000 UTC m=+968.679284880" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.907736 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg" podStartSLOduration=3.811271797 podStartE2EDuration="12.907715995s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:15.767513748 +0000 UTC m=+957.544895495" lastFinishedPulling="2025-10-08 13:34:24.863957936 +0000 UTC m=+966.641339693" observedRunningTime="2025-10-08 13:34:26.898998747 +0000 UTC m=+968.676380504" watchObservedRunningTime="2025-10-08 13:34:26.907715995 +0000 UTC m=+968.685097762" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.926095 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j" podStartSLOduration=3.806530673 podStartE2EDuration="12.926074085s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:15.785388995 +0000 UTC m=+957.562770752" lastFinishedPulling="2025-10-08 13:34:24.904932407 +0000 UTC m=+966.682314164" observedRunningTime="2025-10-08 13:34:26.924096353 +0000 UTC m=+968.701478100" watchObservedRunningTime="2025-10-08 13:34:26.926074085 +0000 UTC m=+968.703455852" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.944585 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57" podStartSLOduration=3.365835802 podStartE2EDuration="12.944564628s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:15.326454977 +0000 UTC m=+957.103836734" lastFinishedPulling="2025-10-08 13:34:24.905183803 +0000 UTC m=+966.682565560" observedRunningTime="2025-10-08 13:34:26.940156913 +0000 UTC m=+968.717538670" 
watchObservedRunningTime="2025-10-08 13:34:26.944564628 +0000 UTC m=+968.721946745" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.970403 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr" podStartSLOduration=3.807829108 podStartE2EDuration="12.970384443s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:15.813218453 +0000 UTC m=+957.590600210" lastFinishedPulling="2025-10-08 13:34:24.975773788 +0000 UTC m=+966.753155545" observedRunningTime="2025-10-08 13:34:26.958632886 +0000 UTC m=+968.736014653" watchObservedRunningTime="2025-10-08 13:34:26.970384443 +0000 UTC m=+968.747766200" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.985909 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh" podStartSLOduration=3.386634047 podStartE2EDuration="12.985888169s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:15.360963729 +0000 UTC m=+957.138345486" lastFinishedPulling="2025-10-08 13:34:24.960217861 +0000 UTC m=+966.737599608" observedRunningTime="2025-10-08 13:34:26.979196664 +0000 UTC m=+968.756578431" watchObservedRunningTime="2025-10-08 13:34:26.985888169 +0000 UTC m=+968.763269936" Oct 08 13:34:26 crc kubenswrapper[5065]: I1008 13:34:26.999552 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb" podStartSLOduration=4.177499832 podStartE2EDuration="12.999532935s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:16.154083874 +0000 UTC m=+957.931465631" lastFinishedPulling="2025-10-08 13:34:24.976116977 +0000 UTC m=+966.753498734" observedRunningTime="2025-10-08 13:34:26.995923021 +0000 UTC m=+968.773304798" watchObservedRunningTime="2025-10-08 13:34:26.999532935 +0000 UTC m=+968.776914702" Oct 08 13:34:27 crc kubenswrapper[5065]: I1008 13:34:27.018726 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g" podStartSLOduration=3.820027457 podStartE2EDuration="13.018710247s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:15.776244696 +0000 UTC m=+957.553626463" lastFinishedPulling="2025-10-08 13:34:24.974927496 +0000 UTC m=+966.752309253" observedRunningTime="2025-10-08 13:34:27.014583999 +0000 UTC m=+968.791965756" watchObservedRunningTime="2025-10-08 13:34:27.018710247 +0000 UTC m=+968.796091994" Oct 08 13:34:27 crc kubenswrapper[5065]: I1008 13:34:27.056063 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx" podStartSLOduration=3.309020007 podStartE2EDuration="13.056041913s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:15.116997031 +0000 UTC m=+956.894378788" lastFinishedPulling="2025-10-08 13:34:24.864018937 +0000 UTC m=+966.641400694" observedRunningTime="2025-10-08 13:34:27.054073651 +0000 UTC m=+968.831455408" watchObservedRunningTime="2025-10-08 13:34:27.056041913 +0000 UTC m=+968.833423680" Oct 08 13:34:27 crc kubenswrapper[5065]: I1008 13:34:27.088725 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq" podStartSLOduration=3.89434328 podStartE2EDuration="13.088708377s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:15.780635631 +0000 UTC m=+957.558017388" lastFinishedPulling="2025-10-08 13:34:24.975000728 +0000 UTC m=+966.752382485" observedRunningTime="2025-10-08 13:34:27.084399074 +0000 UTC m=+968.861780831" watchObservedRunningTime="2025-10-08 13:34:27.088708377 +0000 UTC m=+968.866090134" Oct 08 13:34:31 crc kubenswrapper[5065]: I1008 13:34:31.876055 5065 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 13:34:33 crc kubenswrapper[5065]: I1008 13:34:33.761854 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" event={"ID":"ce67216a-bf27-40b0-8beb-bec511f71d94","Type":"ContainerStarted","Data":"73a92980f2b87029d4d41e25c2b10fef292a3076263491095e3f2d01f9e19b05"} Oct 08 13:34:33 crc kubenswrapper[5065]: I1008 13:34:33.763249 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" Oct 08 13:34:33 crc kubenswrapper[5065]: I1008 13:34:33.786567 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7" podStartSLOduration=2.404056121 podStartE2EDuration="19.78654936s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:16.176178341 +0000 UTC m=+957.953560098" lastFinishedPulling="2025-10-08 13:34:33.55867155 +0000 UTC m=+975.336053337" observedRunningTime="2025-10-08 13:34:33.779524445 +0000 UTC m=+975.556906212" watchObservedRunningTime="2025-10-08 13:34:33.78654936 +0000 UTC m=+975.563931117" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.359961 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-pjzvh" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.392488 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-q2tsx" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.419930 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-rlkxt" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.442858 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-g9s57" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.494358 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-xlm5g" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.502285 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-x8kpn" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.548889 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-dskvg" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.621456 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-zptvr" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.638961 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-p8j4j" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.801043 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-rqvgq" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.812750 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-tlrpv" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.847091 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5df598886f-5k4b8" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.868260 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-7fv6z" Oct 08 13:34:34 crc kubenswrapper[5065]: I1008 13:34:34.901632 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-79db49b9fb-mw7bb" Oct 08 13:34:35 crc kubenswrapper[5065]: I1008 13:34:35.213496 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7794bc6bd-kjqtn" Oct 08 13:34:36 crc kubenswrapper[5065]: I1008 13:34:36.389166 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-747747dfccng5kz" Oct 08 13:34:36 crc kubenswrapper[5065]: I1008 13:34:36.775687 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-8bc6b8f5b-s7fmg" Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 13:34:37.796896 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx" event={"ID":"2b64adbd-0608-45f4-aaf8-3d7af011873e","Type":"ContainerStarted","Data":"65bac993d25a8a00171b4227cbb65f871691585bf8b56902bfe395ad7f8e66d1"} Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 13:34:37.797484 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx" Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 13:34:37.800321 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv" event={"ID":"b9279847-3be8-4917-b1e3-b2d4459f45de","Type":"ContainerStarted","Data":"cce9d69fa49deb2c041c01a81d9de93c0ed7d38083236f1ff39ee648ec8c6690"} Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 13:34:37.800507 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv" Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 13:34:37.802149 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q" event={"ID":"4af395f3-a5b6-4f08-9cf9-99ca8ef679f1","Type":"ContainerStarted","Data":"1ecea7a94f4b9d16f95678a7c436f76e2e138ea223040f072115fa1fe1c1e6a7"} Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 
13:34:37.802614 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q" Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 13:34:37.803988 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm" event={"ID":"bfa86e67-08ac-4df7-84fe-c084e6c05bc1","Type":"ContainerStarted","Data":"cd7858e22362db2eaad28fec035a90d2a1108fd0e4b1f7b7d0b9414e6c267f05"} Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 13:34:37.804268 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm" Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 13:34:37.805706 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92" event={"ID":"3d693cfe-0346-4970-ba03-dde30d33fb28","Type":"ContainerStarted","Data":"dd0030634d3b487adbf817f1fd59b5d06a87fea734141532f5dd2e293a6116e8"} Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 13:34:37.815102 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx" podStartSLOduration=3.116515778 podStartE2EDuration="23.815079826s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:16.183589145 +0000 UTC m=+957.960970902" lastFinishedPulling="2025-10-08 13:34:36.882153193 +0000 UTC m=+978.659534950" observedRunningTime="2025-10-08 13:34:37.814089239 +0000 UTC m=+979.591471046" watchObservedRunningTime="2025-10-08 13:34:37.815079826 +0000 UTC m=+979.592461583" Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 13:34:37.837984 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv" podStartSLOduration=3.055918619 podStartE2EDuration="23.837958151s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:16.176635323 +0000 UTC m=+957.954017090" lastFinishedPulling="2025-10-08 13:34:36.958674865 +0000 UTC m=+978.736056622" observedRunningTime="2025-10-08 13:34:37.831108561 +0000 UTC m=+979.608490318" watchObservedRunningTime="2025-10-08 13:34:37.837958151 +0000 UTC m=+979.615339948" Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 13:34:37.845863 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-j8b92" podStartSLOduration=3.247151542 podStartE2EDuration="23.84584591s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:16.28474651 +0000 UTC m=+958.062128267" lastFinishedPulling="2025-10-08 13:34:36.883440878 +0000 UTC m=+978.660822635" observedRunningTime="2025-10-08 13:34:37.84227371 +0000 UTC m=+979.619655487" watchObservedRunningTime="2025-10-08 13:34:37.84584591 +0000 UTC m=+979.623227667" Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 13:34:37.861146 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q" podStartSLOduration=3.162785532 podStartE2EDuration="23.861121293s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:16.183311928 +0000 UTC m=+957.960693685" lastFinishedPulling="2025-10-08 13:34:36.881647669 +0000 UTC m=+978.659029446" observedRunningTime="2025-10-08 
13:34:37.855167608 +0000 UTC m=+979.632549375" watchObservedRunningTime="2025-10-08 13:34:37.861121293 +0000 UTC m=+979.638503090"
Oct 08 13:34:37 crc kubenswrapper[5065]: I1008 13:34:37.872712 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm" podStartSLOduration=3.275625061 podStartE2EDuration="23.872694654s" podCreationTimestamp="2025-10-08 13:34:14 +0000 UTC" firstStartedPulling="2025-10-08 13:34:16.284517724 +0000 UTC m=+958.061899481" lastFinishedPulling="2025-10-08 13:34:36.881587317 +0000 UTC m=+978.658969074" observedRunningTime="2025-10-08 13:34:37.872130769 +0000 UTC m=+979.649512536" watchObservedRunningTime="2025-10-08 13:34:37.872694654 +0000 UTC m=+979.650076411"
Oct 08 13:34:44 crc kubenswrapper[5065]: I1008 13:34:44.939585 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-9wdcx"
Oct 08 13:34:44 crc kubenswrapper[5065]: I1008 13:34:44.983067 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-947sm"
Oct 08 13:34:45 crc kubenswrapper[5065]: I1008 13:34:45.048185 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76796d4c6b-rwmpv"
Oct 08 13:34:45 crc kubenswrapper[5065]: I1008 13:34:45.112853 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-gwsl7"
Oct 08 13:34:45 crc kubenswrapper[5065]: I1008 13:34:45.189193 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56c698c775-nct4q"
Oct 08 13:34:58 crc kubenswrapper[5065]: I1008 13:34:58.859642 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-wwgjm"]
Oct 08 13:34:58 crc kubenswrapper[5065]: I1008 13:34:58.861976 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-wwgjm"
Oct 08 13:34:58 crc kubenswrapper[5065]: I1008 13:34:58.865052 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 08 13:34:58 crc kubenswrapper[5065]: I1008 13:34:58.865661 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-b54ms"
Oct 08 13:34:58 crc kubenswrapper[5065]: I1008 13:34:58.865784 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Oct 08 13:34:58 crc kubenswrapper[5065]: I1008 13:34:58.866193 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Oct 08 13:34:58 crc kubenswrapper[5065]: I1008 13:34:58.871151 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-wwgjm"]
Oct 08 13:34:58 crc kubenswrapper[5065]: I1008 13:34:58.924384 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-6vz8k"]
Oct 08 13:34:58 crc kubenswrapper[5065]: I1008 13:34:58.926008 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-6vz8k"
Oct 08 13:34:58 crc kubenswrapper[5065]: I1008 13:34:58.932307 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 08 13:34:58 crc kubenswrapper[5065]: I1008 13:34:58.937078 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-6vz8k"]
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.024274 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q65p4\" (UniqueName: \"kubernetes.io/projected/d102863a-f829-47cd-9de5-5f9fdb59dab2-kube-api-access-q65p4\") pod \"dnsmasq-dns-7bfcb9d745-wwgjm\" (UID: \"d102863a-f829-47cd-9de5-5f9fdb59dab2\") " pod="openstack/dnsmasq-dns-7bfcb9d745-wwgjm"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.024326 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d102863a-f829-47cd-9de5-5f9fdb59dab2-config\") pod \"dnsmasq-dns-7bfcb9d745-wwgjm\" (UID: \"d102863a-f829-47cd-9de5-5f9fdb59dab2\") " pod="openstack/dnsmasq-dns-7bfcb9d745-wwgjm"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.024583 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb42c24-7e1e-49c0-8ebe-650683d96a1c-config\") pod \"dnsmasq-dns-758b79db4c-6vz8k\" (UID: \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\") " pod="openstack/dnsmasq-dns-758b79db4c-6vz8k"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.024663 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adb42c24-7e1e-49c0-8ebe-650683d96a1c-dns-svc\") pod \"dnsmasq-dns-758b79db4c-6vz8k\" (UID: \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\") " pod="openstack/dnsmasq-dns-758b79db4c-6vz8k"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.024750 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz4b5\" (UniqueName: \"kubernetes.io/projected/adb42c24-7e1e-49c0-8ebe-650683d96a1c-kube-api-access-hz4b5\") pod \"dnsmasq-dns-758b79db4c-6vz8k\" (UID: \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\") " pod="openstack/dnsmasq-dns-758b79db4c-6vz8k"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.126348 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q65p4\" (UniqueName: \"kubernetes.io/projected/d102863a-f829-47cd-9de5-5f9fdb59dab2-kube-api-access-q65p4\") pod \"dnsmasq-dns-7bfcb9d745-wwgjm\" (UID: \"d102863a-f829-47cd-9de5-5f9fdb59dab2\") " pod="openstack/dnsmasq-dns-7bfcb9d745-wwgjm"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.126439 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d102863a-f829-47cd-9de5-5f9fdb59dab2-config\") pod \"dnsmasq-dns-7bfcb9d745-wwgjm\" (UID: \"d102863a-f829-47cd-9de5-5f9fdb59dab2\") " pod="openstack/dnsmasq-dns-7bfcb9d745-wwgjm"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.126519 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb42c24-7e1e-49c0-8ebe-650683d96a1c-config\") pod \"dnsmasq-dns-758b79db4c-6vz8k\" (UID: \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\") " pod="openstack/dnsmasq-dns-758b79db4c-6vz8k"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.126555 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adb42c24-7e1e-49c0-8ebe-650683d96a1c-dns-svc\") pod \"dnsmasq-dns-758b79db4c-6vz8k\" (UID: \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\") " pod="openstack/dnsmasq-dns-758b79db4c-6vz8k"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.126594 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz4b5\" (UniqueName: \"kubernetes.io/projected/adb42c24-7e1e-49c0-8ebe-650683d96a1c-kube-api-access-hz4b5\") pod \"dnsmasq-dns-758b79db4c-6vz8k\" (UID: \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\") " pod="openstack/dnsmasq-dns-758b79db4c-6vz8k"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.127557 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb42c24-7e1e-49c0-8ebe-650683d96a1c-config\") pod \"dnsmasq-dns-758b79db4c-6vz8k\" (UID: \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\") " pod="openstack/dnsmasq-dns-758b79db4c-6vz8k"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.127618 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adb42c24-7e1e-49c0-8ebe-650683d96a1c-dns-svc\") pod \"dnsmasq-dns-758b79db4c-6vz8k\" (UID: \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\") " pod="openstack/dnsmasq-dns-758b79db4c-6vz8k"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.127776 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d102863a-f829-47cd-9de5-5f9fdb59dab2-config\") pod \"dnsmasq-dns-7bfcb9d745-wwgjm\" (UID: \"d102863a-f829-47cd-9de5-5f9fdb59dab2\") " pod="openstack/dnsmasq-dns-7bfcb9d745-wwgjm"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.145129 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q65p4\" (UniqueName: \"kubernetes.io/projected/d102863a-f829-47cd-9de5-5f9fdb59dab2-kube-api-access-q65p4\") pod \"dnsmasq-dns-7bfcb9d745-wwgjm\" (UID: \"d102863a-f829-47cd-9de5-5f9fdb59dab2\") " pod="openstack/dnsmasq-dns-7bfcb9d745-wwgjm"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.148252 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz4b5\" (UniqueName: \"kubernetes.io/projected/adb42c24-7e1e-49c0-8ebe-650683d96a1c-kube-api-access-hz4b5\") pod \"dnsmasq-dns-758b79db4c-6vz8k\" (UID: \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\") " pod="openstack/dnsmasq-dns-758b79db4c-6vz8k"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.181541 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-wwgjm"
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.238779 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-6vz8k"
Oct 08 13:34:59 crc kubenswrapper[5065]: W1008 13:34:59.642042 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd102863a_f829_47cd_9de5_5f9fdb59dab2.slice/crio-dcebe7e47b0da4eee1002ce5c284c1c401212be41b5d38bcf6926353a5314b0a WatchSource:0}: Error finding container dcebe7e47b0da4eee1002ce5c284c1c401212be41b5d38bcf6926353a5314b0a: Status 404 returned error can't find the container with id dcebe7e47b0da4eee1002ce5c284c1c401212be41b5d38bcf6926353a5314b0a
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.642942 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-wwgjm"]
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.770599 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-6vz8k"]
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.978900 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfcb9d745-wwgjm" event={"ID":"d102863a-f829-47cd-9de5-5f9fdb59dab2","Type":"ContainerStarted","Data":"dcebe7e47b0da4eee1002ce5c284c1c401212be41b5d38bcf6926353a5314b0a"}
Oct 08 13:34:59 crc kubenswrapper[5065]: I1008 13:34:59.980979 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758b79db4c-6vz8k" event={"ID":"adb42c24-7e1e-49c0-8ebe-650683d96a1c","Type":"ContainerStarted","Data":"200ceda972de5efea1e46627027f7e82bc211f35d7b5f98e692360bd27813770"}
Oct 08 13:35:01 crc kubenswrapper[5065]: I1008 13:35:01.805483 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-6vz8k"]
Oct 08 13:35:01 crc kubenswrapper[5065]: I1008 13:35:01.840900 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-644597f84c-78hgj"]
Oct 08 13:35:01 crc kubenswrapper[5065]: I1008 13:35:01.842040 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-644597f84c-78hgj"
Oct 08 13:35:01 crc kubenswrapper[5065]: I1008 13:35:01.861634 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-644597f84c-78hgj"]
Oct 08 13:35:01 crc kubenswrapper[5065]: I1008 13:35:01.966238 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e9e9334-4e87-400b-b5e7-9ca7d7293233-dns-svc\") pod \"dnsmasq-dns-644597f84c-78hgj\" (UID: \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\") " pod="openstack/dnsmasq-dns-644597f84c-78hgj"
Oct 08 13:35:01 crc kubenswrapper[5065]: I1008 13:35:01.966316 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9e9334-4e87-400b-b5e7-9ca7d7293233-config\") pod \"dnsmasq-dns-644597f84c-78hgj\" (UID: \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\") " pod="openstack/dnsmasq-dns-644597f84c-78hgj"
Oct 08 13:35:01 crc kubenswrapper[5065]: I1008 13:35:01.966361 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkwk7\" (UniqueName: \"kubernetes.io/projected/6e9e9334-4e87-400b-b5e7-9ca7d7293233-kube-api-access-qkwk7\") pod \"dnsmasq-dns-644597f84c-78hgj\" (UID: \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\") " pod="openstack/dnsmasq-dns-644597f84c-78hgj"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.067983 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9e9334-4e87-400b-b5e7-9ca7d7293233-config\") pod \"dnsmasq-dns-644597f84c-78hgj\" (UID: \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\") " pod="openstack/dnsmasq-dns-644597f84c-78hgj"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.068054 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkwk7\" (UniqueName: \"kubernetes.io/projected/6e9e9334-4e87-400b-b5e7-9ca7d7293233-kube-api-access-qkwk7\") pod \"dnsmasq-dns-644597f84c-78hgj\" (UID: \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\") " pod="openstack/dnsmasq-dns-644597f84c-78hgj"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.068106 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e9e9334-4e87-400b-b5e7-9ca7d7293233-dns-svc\") pod \"dnsmasq-dns-644597f84c-78hgj\" (UID: \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\") " pod="openstack/dnsmasq-dns-644597f84c-78hgj"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.069106 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e9e9334-4e87-400b-b5e7-9ca7d7293233-dns-svc\") pod \"dnsmasq-dns-644597f84c-78hgj\" (UID: \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\") " pod="openstack/dnsmasq-dns-644597f84c-78hgj"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.070142 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9e9334-4e87-400b-b5e7-9ca7d7293233-config\") pod \"dnsmasq-dns-644597f84c-78hgj\" (UID: \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\") " pod="openstack/dnsmasq-dns-644597f84c-78hgj"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.110112 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkwk7\" (UniqueName: \"kubernetes.io/projected/6e9e9334-4e87-400b-b5e7-9ca7d7293233-kube-api-access-qkwk7\") pod \"dnsmasq-dns-644597f84c-78hgj\" (UID: \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\") " pod="openstack/dnsmasq-dns-644597f84c-78hgj"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.113009 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-wwgjm"]
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.135742 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77597f887-g9gk5"]
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.137485 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-g9gk5"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.146717 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77597f887-g9gk5"]
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.164946 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-644597f84c-78hgj"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.271587 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-dns-svc\") pod \"dnsmasq-dns-77597f887-g9gk5\" (UID: \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\") " pod="openstack/dnsmasq-dns-77597f887-g9gk5"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.271664 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k2fk\" (UniqueName: \"kubernetes.io/projected/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-kube-api-access-4k2fk\") pod \"dnsmasq-dns-77597f887-g9gk5\" (UID: \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\") " pod="openstack/dnsmasq-dns-77597f887-g9gk5"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.271763 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-config\") pod \"dnsmasq-dns-77597f887-g9gk5\" (UID: \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\") " pod="openstack/dnsmasq-dns-77597f887-g9gk5"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.374237 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-config\") pod \"dnsmasq-dns-77597f887-g9gk5\" (UID: \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\") " pod="openstack/dnsmasq-dns-77597f887-g9gk5"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.374562 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-dns-svc\") pod \"dnsmasq-dns-77597f887-g9gk5\" (UID: \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\") " pod="openstack/dnsmasq-dns-77597f887-g9gk5"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.374604 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k2fk\" (UniqueName: \"kubernetes.io/projected/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-kube-api-access-4k2fk\") pod \"dnsmasq-dns-77597f887-g9gk5\" (UID: \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\") " pod="openstack/dnsmasq-dns-77597f887-g9gk5"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.376038 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-config\") pod \"dnsmasq-dns-77597f887-g9gk5\" (UID: \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\") " pod="openstack/dnsmasq-dns-77597f887-g9gk5"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.377356 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-dns-svc\") pod \"dnsmasq-dns-77597f887-g9gk5\" (UID: \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\") " pod="openstack/dnsmasq-dns-77597f887-g9gk5"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.434642 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k2fk\" (UniqueName: \"kubernetes.io/projected/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-kube-api-access-4k2fk\") pod \"dnsmasq-dns-77597f887-g9gk5\" (UID: \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\") " pod="openstack/dnsmasq-dns-77597f887-g9gk5"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.469190 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-g9gk5"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.506566 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-644597f84c-78hgj"]
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.980253 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.982262 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.984060 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.984272 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-t2j9v"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.984479 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.984670 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.984824 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.995481 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.997093 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Oct 08 13:35:02 crc kubenswrapper[5065]: I1008 13:35:02.997442 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.087966 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.088087 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.088115 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.088155 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.088185 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.088358 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.088568 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.088700 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8q9c\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-kube-api-access-s8q9c\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.088882 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.088920 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.088947 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.190141 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.190463 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.190499 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.190521 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.190548 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8q9c\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-kube-api-access-s8q9c\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.190634 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.190660 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.190678 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.190701 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.190738 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.190758 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.191399 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.191466 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.191548 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.191662 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.192646 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.192736 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.194609 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.194689 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.195024 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.195342 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.207151 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8q9c\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-kube-api-access-s8q9c\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.215521 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.270287 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.273007 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.282781 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.283344 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.283627 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.283800 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.283821 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8vxtf"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.284002 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.284139 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.284850 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.351315 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.394944 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.395001 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a416f725-cd7c-4bd8-9123-28cad18157d9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.395038 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.395073 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.395117 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.395153 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a416f725-cd7c-4bd8-9123-28cad18157d9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.395176 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvspz\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-kube-api-access-dvspz\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.395207 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.395257 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.395298 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.395328 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.497004 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.497070 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.497106 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.497125 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.497156 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.497173 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a416f725-cd7c-4bd8-9123-28cad18157d9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.497208 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.497239 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.497661 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.498160 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.498848 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.497252 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.498962 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.498974 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.499018 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a416f725-cd7c-4bd8-9123-28cad18157d9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.499039 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvspz\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-kube-api-access-dvspz\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.499249 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.501842 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a416f725-cd7c-4bd8-9123-28cad18157d9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.503234 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.503985 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.515645 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvspz\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-kube-api-access-dvspz\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.515763 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a416f725-cd7c-4bd8-9123-28cad18157d9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.527480 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:03 crc kubenswrapper[5065]: I1008 13:35:03.595655 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.028729 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-644597f84c-78hgj" event={"ID":"6e9e9334-4e87-400b-b5e7-9ca7d7293233","Type":"ContainerStarted","Data":"de18137f956859fa9e8b14b67c4234f817d663adb45c50b6add7ade4691397ec"}
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.570537 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.571812 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.577304 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.579240 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.579570 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.579690 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.583311 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-tvmx8"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.584288 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.585242 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.628654 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/050c0e99-7984-43be-8701-84602f0c9294-config-data-generated\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.628705 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.628737 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-operator-scripts\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.628764 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-config-data-default\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.628813 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zwvr\" (UniqueName: \"kubernetes.io/projected/050c0e99-7984-43be-8701-84602f0c9294-kube-api-access-5zwvr\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.628874 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-secrets\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.628914 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.628968 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-kolla-config\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.629261 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.691690 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.693040 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.697181 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.697348 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.697504 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.697639 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-2f97g"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.698808 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731002 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731059 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731082 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731106 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/050c0e99-7984-43be-8701-84602f0c9294-config-data-generated\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731127 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731145 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-operator-scripts\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731165 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-config-data-default\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731182 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731203 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rth65\" (UniqueName: \"kubernetes.io/projected/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-kube-api-access-rth65\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731220 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731236 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731263 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zwvr\" (UniqueName: \"kubernetes.io/projected/050c0e99-7984-43be-8701-84602f0c9294-kube-api-access-5zwvr\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731287 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731310 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731500 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731609 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-secrets\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731663 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731914 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.731943 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-kolla-config\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.732270 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-config-data-default\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.732371 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-operator-scripts\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.732724 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-kolla-config\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.732787 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/050c0e99-7984-43be-8701-84602f0c9294-config-data-generated\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.753986 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.754230 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.754441 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-secrets\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.758065 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zwvr\" (UniqueName: \"kubernetes.io/projected/050c0e99-7984-43be-8701-84602f0c9294-kube-api-access-5zwvr\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.760795 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " pod="openstack/openstack-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.834248 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.834292 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.834324 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:05 crc kubenswrapper[5065]: I1008
13:35:05.834349 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rth65\" (UniqueName: \"kubernetes.io/projected/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-kube-api-access-rth65\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.834373 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.834390 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.834431 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.834490 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.834538 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.834790 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.836724 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.836899 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.838087 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.838401 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.840635 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.846620 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.848479 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.853970 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rth65\" (UniqueName: \"kubernetes.io/projected/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-kube-api-access-rth65\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.869060 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:05 crc kubenswrapper[5065]: I1008 13:35:05.912832 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.006474 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.115797 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.117003 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.119822 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.119904 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.132821 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-92xzv" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.141131 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a29eea83-9d60-4101-a351-6f8468a8116c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.141222 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdjn9\" (UniqueName: \"kubernetes.io/projected/a29eea83-9d60-4101-a351-6f8468a8116c-kube-api-access-hdjn9\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.141261 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29eea83-9d60-4101-a351-6f8468a8116c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.141295 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a29eea83-9d60-4101-a351-6f8468a8116c-config-data\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.141337 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a29eea83-9d60-4101-a351-6f8468a8116c-kolla-config\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.144668 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.242655 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a29eea83-9d60-4101-a351-6f8468a8116c-kolla-config\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.242757 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a29eea83-9d60-4101-a351-6f8468a8116c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.242823 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdjn9\" (UniqueName: 
\"kubernetes.io/projected/a29eea83-9d60-4101-a351-6f8468a8116c-kube-api-access-hdjn9\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.242895 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29eea83-9d60-4101-a351-6f8468a8116c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.242933 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a29eea83-9d60-4101-a351-6f8468a8116c-config-data\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.243405 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a29eea83-9d60-4101-a351-6f8468a8116c-kolla-config\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.243719 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a29eea83-9d60-4101-a351-6f8468a8116c-config-data\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.249081 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29eea83-9d60-4101-a351-6f8468a8116c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.252490 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a29eea83-9d60-4101-a351-6f8468a8116c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.258091 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdjn9\" (UniqueName: \"kubernetes.io/projected/a29eea83-9d60-4101-a351-6f8468a8116c-kube-api-access-hdjn9\") pod \"memcached-0\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " pod="openstack/memcached-0" Oct 08 13:35:06 crc kubenswrapper[5065]: I1008 13:35:06.453067 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 08 13:35:07 crc kubenswrapper[5065]: I1008 13:35:07.933585 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 13:35:07 crc kubenswrapper[5065]: I1008 13:35:07.937617 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 13:35:07 crc kubenswrapper[5065]: I1008 13:35:07.940439 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-j4dcv" Oct 08 13:35:07 crc kubenswrapper[5065]: I1008 13:35:07.941450 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 13:35:07 crc kubenswrapper[5065]: I1008 13:35:07.968909 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j676f\" (UniqueName: \"kubernetes.io/projected/de6b79be-fc23-4b08-bc30-192946f827af-kube-api-access-j676f\") pod \"kube-state-metrics-0\" (UID: \"de6b79be-fc23-4b08-bc30-192946f827af\") " pod="openstack/kube-state-metrics-0" Oct 08 13:35:08 crc kubenswrapper[5065]: I1008 13:35:08.071352 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j676f\" (UniqueName: \"kubernetes.io/projected/de6b79be-fc23-4b08-bc30-192946f827af-kube-api-access-j676f\") pod \"kube-state-metrics-0\" (UID: \"de6b79be-fc23-4b08-bc30-192946f827af\") " pod="openstack/kube-state-metrics-0" Oct 08 13:35:08 crc kubenswrapper[5065]: I1008 13:35:08.110275 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j676f\" (UniqueName: \"kubernetes.io/projected/de6b79be-fc23-4b08-bc30-192946f827af-kube-api-access-j676f\") pod \"kube-state-metrics-0\" (UID: \"de6b79be-fc23-4b08-bc30-192946f827af\") " pod="openstack/kube-state-metrics-0" Oct 08 13:35:08 crc kubenswrapper[5065]: I1008 13:35:08.304733 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.488106 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xnw9m"] Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.490847 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.493703 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.493876 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.493968 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-94999" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.495142 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-f9wxn"] Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.497041 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.516463 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f9wxn"] Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.537357 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xnw9m"] Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.561389 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-log-ovn\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.561464 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4749b7e4-3896-474d-84b3-8ddf351a24ac-ovn-controller-tls-certs\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.561502 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-run\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.561869 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-lib\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.561908 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjkg6\" (UniqueName: \"kubernetes.io/projected/f523d852-2e73-4168-b3ca-af18fa28cc07-kube-api-access-xjkg6\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.561929 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d2m4\" (UniqueName: \"kubernetes.io/projected/4749b7e4-3896-474d-84b3-8ddf351a24ac-kube-api-access-4d2m4\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.561961 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-run\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.561982 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-etc-ovs\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc 
kubenswrapper[5065]: I1008 13:35:13.562009 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f523d852-2e73-4168-b3ca-af18fa28cc07-scripts\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.562035 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4749b7e4-3896-474d-84b3-8ddf351a24ac-scripts\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.562071 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4749b7e4-3896-474d-84b3-8ddf351a24ac-combined-ca-bundle\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.562085 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-run-ovn\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.562125 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-log\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.665982 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f523d852-2e73-4168-b3ca-af18fa28cc07-scripts\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.666300 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4749b7e4-3896-474d-84b3-8ddf351a24ac-scripts\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.666352 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4749b7e4-3896-474d-84b3-8ddf351a24ac-combined-ca-bundle\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.666377 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-run-ovn\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.666433 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-log\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.666488 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-log-ovn\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.666519 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4749b7e4-3896-474d-84b3-8ddf351a24ac-ovn-controller-tls-certs\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.666553 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-run\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.666588 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-lib\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.666611 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjkg6\" (UniqueName: \"kubernetes.io/projected/f523d852-2e73-4168-b3ca-af18fa28cc07-kube-api-access-xjkg6\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.666634 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d2m4\" (UniqueName: \"kubernetes.io/projected/4749b7e4-3896-474d-84b3-8ddf351a24ac-kube-api-access-4d2m4\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.666662 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-run\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.666685 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-etc-ovs\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.667293 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-etc-ovs\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " 
pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.669588 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-run\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.669717 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-lib\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.672537 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-run-ovn\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.673128 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-log-ovn\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.673577 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-log\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.673629 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f523d852-2e73-4168-b3ca-af18fa28cc07-scripts\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.673776 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-run\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.674298 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4749b7e4-3896-474d-84b3-8ddf351a24ac-scripts\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.684544 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4749b7e4-3896-474d-84b3-8ddf351a24ac-combined-ca-bundle\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.689688 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4749b7e4-3896-474d-84b3-8ddf351a24ac-ovn-controller-tls-certs\") pod 
\"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.690670 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjkg6\" (UniqueName: \"kubernetes.io/projected/f523d852-2e73-4168-b3ca-af18fa28cc07-kube-api-access-xjkg6\") pod \"ovn-controller-ovs-f9wxn\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.694766 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d2m4\" (UniqueName: \"kubernetes.io/projected/4749b7e4-3896-474d-84b3-8ddf351a24ac-kube-api-access-4d2m4\") pod \"ovn-controller-xnw9m\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") " pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.725894 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.752806 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.752912 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.818124 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.818333 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.818531 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.818671 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.818775 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-crqx8" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.820553 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xnw9m" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.877231 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.922156 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.922220 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.922373 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.922496 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38fd97a6-e936-4503-a238-97b63e01a7de-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.922546 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.922596 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fd97a6-e936-4503-a238-97b63e01a7de-config\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.922629 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mlzk\" (UniqueName: \"kubernetes.io/projected/38fd97a6-e936-4503-a238-97b63e01a7de-kube-api-access-5mlzk\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:13 crc kubenswrapper[5065]: I1008 13:35:13.922698 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/38fd97a6-e936-4503-a238-97b63e01a7de-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.024258 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.024346 5065 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fd97a6-e936-4503-a238-97b63e01a7de-config\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.024375 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mlzk\" (UniqueName: \"kubernetes.io/projected/38fd97a6-e936-4503-a238-97b63e01a7de-kube-api-access-5mlzk\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.024477 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/38fd97a6-e936-4503-a238-97b63e01a7de-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.024520 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.024545 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.024580 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.024651 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38fd97a6-e936-4503-a238-97b63e01a7de-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.024723 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.025441 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fd97a6-e936-4503-a238-97b63e01a7de-config\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.025867 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/38fd97a6-e936-4503-a238-97b63e01a7de-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " 
pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.037440 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38fd97a6-e936-4503-a238-97b63e01a7de-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.041626 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.041856 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.044163 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mlzk\" (UniqueName: \"kubernetes.io/projected/38fd97a6-e936-4503-a238-97b63e01a7de-kube-api-access-5mlzk\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.045695 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.055530 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") " pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.133021 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77597f887-g9gk5"] Oct 08 13:35:14 crc kubenswrapper[5065]: I1008 13:35:14.151167 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:14 crc kubenswrapper[5065]: E1008 13:35:14.682345 5065 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df" Oct 08 13:35:14 crc kubenswrapper[5065]: E1008 13:35:14.683587 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q65p4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7bfcb9d745-wwgjm_openstack(d102863a-f829-47cd-9de5-5f9fdb59dab2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 13:35:14 crc kubenswrapper[5065]: E1008 13:35:14.685030 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7bfcb9d745-wwgjm" podUID="d102863a-f829-47cd-9de5-5f9fdb59dab2" Oct 08 13:35:14 crc kubenswrapper[5065]: E1008 13:35:14.745051 5065 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df" Oct 08 13:35:14 crc kubenswrapper[5065]: E1008 13:35:14.745193 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hz4b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-758b79db4c-6vz8k_openstack(adb42c24-7e1e-49c0-8ebe-650683d96a1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 13:35:14 crc kubenswrapper[5065]: E1008 13:35:14.746553 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-758b79db4c-6vz8k" podUID="adb42c24-7e1e-49c0-8ebe-650683d96a1c" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.108928 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-g9gk5" event={"ID":"a86bcb56-84dd-44cc-9e43-07e603bdbb6b","Type":"ContainerStarted","Data":"a0445d689d6418c9516c050ca75dc142ea85a0ceb09ee763ca2ba12890f9de4c"} Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.264884 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.266852 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.271179 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.271496 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.271581 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.271508 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6ctz6" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.276342 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.350193 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.350241 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.350263 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf7xl\" (UniqueName: \"kubernetes.io/projected/b215a42c-d422-4db9-a83e-df79f7bff9e6-kube-api-access-nf7xl\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.350313 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b215a42c-d422-4db9-a83e-df79f7bff9e6-config\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.350328 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b215a42c-d422-4db9-a83e-df79f7bff9e6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.350353 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.350380 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b215a42c-d422-4db9-a83e-df79f7bff9e6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " 
pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.350504 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.427518 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-wwgjm" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.447250 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-6vz8k" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.452027 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b215a42c-d422-4db9-a83e-df79f7bff9e6-config\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.452072 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b215a42c-d422-4db9-a83e-df79f7bff9e6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.452107 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.452132 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b215a42c-d422-4db9-a83e-df79f7bff9e6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.452205 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.452252 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.452273 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.452297 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf7xl\" (UniqueName: 
\"kubernetes.io/projected/b215a42c-d422-4db9-a83e-df79f7bff9e6-kube-api-access-nf7xl\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.453433 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b215a42c-d422-4db9-a83e-df79f7bff9e6-config\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.454351 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b215a42c-d422-4db9-a83e-df79f7bff9e6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.455504 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b215a42c-d422-4db9-a83e-df79f7bff9e6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.455942 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.459343 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.459850 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.465883 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.472980 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf7xl\" (UniqueName: \"kubernetes.io/projected/b215a42c-d422-4db9-a83e-df79f7bff9e6-kube-api-access-nf7xl\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.480226 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") " pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.528362 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/memcached-0"] Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.543046 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.554186 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb42c24-7e1e-49c0-8ebe-650683d96a1c-config\") pod \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\" (UID: \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\") " Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.554238 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q65p4\" (UniqueName: \"kubernetes.io/projected/d102863a-f829-47cd-9de5-5f9fdb59dab2-kube-api-access-q65p4\") pod \"d102863a-f829-47cd-9de5-5f9fdb59dab2\" (UID: \"d102863a-f829-47cd-9de5-5f9fdb59dab2\") " Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.554373 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adb42c24-7e1e-49c0-8ebe-650683d96a1c-dns-svc\") pod \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\" (UID: \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\") " Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.554402 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d102863a-f829-47cd-9de5-5f9fdb59dab2-config\") pod \"d102863a-f829-47cd-9de5-5f9fdb59dab2\" (UID: \"d102863a-f829-47cd-9de5-5f9fdb59dab2\") " Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.554510 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz4b5\" (UniqueName: \"kubernetes.io/projected/adb42c24-7e1e-49c0-8ebe-650683d96a1c-kube-api-access-hz4b5\") pod \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\" (UID: \"adb42c24-7e1e-49c0-8ebe-650683d96a1c\") " Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.555279 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adb42c24-7e1e-49c0-8ebe-650683d96a1c-config" (OuterVolumeSpecName: "config") pod "adb42c24-7e1e-49c0-8ebe-650683d96a1c" (UID: "adb42c24-7e1e-49c0-8ebe-650683d96a1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.555737 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d102863a-f829-47cd-9de5-5f9fdb59dab2-config" (OuterVolumeSpecName: "config") pod "d102863a-f829-47cd-9de5-5f9fdb59dab2" (UID: "d102863a-f829-47cd-9de5-5f9fdb59dab2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.555894 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adb42c24-7e1e-49c0-8ebe-650683d96a1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "adb42c24-7e1e-49c0-8ebe-650683d96a1c" (UID: "adb42c24-7e1e-49c0-8ebe-650683d96a1c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.558451 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d102863a-f829-47cd-9de5-5f9fdb59dab2-kube-api-access-q65p4" (OuterVolumeSpecName: "kube-api-access-q65p4") pod "d102863a-f829-47cd-9de5-5f9fdb59dab2" (UID: "d102863a-f829-47cd-9de5-5f9fdb59dab2"). InnerVolumeSpecName "kube-api-access-q65p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.565069 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.580692 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb42c24-7e1e-49c0-8ebe-650683d96a1c-kube-api-access-hz4b5" (OuterVolumeSpecName: "kube-api-access-hz4b5") pod "adb42c24-7e1e-49c0-8ebe-650683d96a1c" (UID: "adb42c24-7e1e-49c0-8ebe-650683d96a1c"). InnerVolumeSpecName "kube-api-access-hz4b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.601710 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:15 crc kubenswrapper[5065]: W1008 13:35:15.609894 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde6b79be_fc23_4b08_bc30_192946f827af.slice/crio-ac6cb4ebb0a4079584f6062f4b622297c4b292d76613b132b77d70099fe9b037 WatchSource:0}: Error finding container ac6cb4ebb0a4079584f6062f4b622297c4b292d76613b132b77d70099fe9b037: Status 404 returned error can't find the container with id ac6cb4ebb0a4079584f6062f4b622297c4b292d76613b132b77d70099fe9b037 Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.621453 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.643082 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.656560 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d102863a-f829-47cd-9de5-5f9fdb59dab2-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.656595 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz4b5\" (UniqueName: \"kubernetes.io/projected/adb42c24-7e1e-49c0-8ebe-650683d96a1c-kube-api-access-hz4b5\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.656609 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb42c24-7e1e-49c0-8ebe-650683d96a1c-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.656620 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q65p4\" (UniqueName: \"kubernetes.io/projected/d102863a-f829-47cd-9de5-5f9fdb59dab2-kube-api-access-q65p4\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.656632 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adb42c24-7e1e-49c0-8ebe-650683d96a1c-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.657572 
5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.748373 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 13:35:15 crc kubenswrapper[5065]: W1008 13:35:15.755702 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38fd97a6_e936_4503_a238_97b63e01a7de.slice/crio-dfdde920204eae35ac2e7b9cdb76c94155c0b6ebbc6c25aec62ddcbd9f298819 WatchSource:0}: Error finding container dfdde920204eae35ac2e7b9cdb76c94155c0b6ebbc6c25aec62ddcbd9f298819: Status 404 returned error can't find the container with id dfdde920204eae35ac2e7b9cdb76c94155c0b6ebbc6c25aec62ddcbd9f298819 Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.807573 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xnw9m"] Oct 08 13:35:15 crc kubenswrapper[5065]: W1008 13:35:15.819767 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4749b7e4_3896_474d_84b3_8ddf351a24ac.slice/crio-e7faa00fac253da8b23fd9286508450d6474c4f49878ad51e850818dedbbc485 WatchSource:0}: Error finding container e7faa00fac253da8b23fd9286508450d6474c4f49878ad51e850818dedbbc485: Status 404 returned error can't find the container with id e7faa00fac253da8b23fd9286508450d6474c4f49878ad51e850818dedbbc485 Oct 08 13:35:15 crc kubenswrapper[5065]: I1008 13:35:15.910162 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f9wxn"] Oct 08 13:35:15 crc kubenswrapper[5065]: W1008 13:35:15.911571 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf523d852_2e73_4168_b3ca_af18fa28cc07.slice/crio-7fb519e84f26876238729134c00763e1fe3488e3657accc660e01f25a7662e66 WatchSource:0}: Error finding container 7fb519e84f26876238729134c00763e1fe3488e3657accc660e01f25a7662e66: Status 404 returned error can't find the container with id 7fb519e84f26876238729134c00763e1fe3488e3657accc660e01f25a7662e66 Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.117621 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de6b79be-fc23-4b08-bc30-192946f827af","Type":"ContainerStarted","Data":"ac6cb4ebb0a4079584f6062f4b622297c4b292d76613b132b77d70099fe9b037"} Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.120228 5065 generic.go:334] "Generic (PLEG): container finished" podID="6e9e9334-4e87-400b-b5e7-9ca7d7293233" containerID="3841ecd9605f7468ffd4bad07b1cf6f59958d2d68d9ce2251909c4a6a6c5fdd6" exitCode=0 Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.120296 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-644597f84c-78hgj" event={"ID":"6e9e9334-4e87-400b-b5e7-9ca7d7293233","Type":"ContainerDied","Data":"3841ecd9605f7468ffd4bad07b1cf6f59958d2d68d9ce2251909c4a6a6c5fdd6"} Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.124256 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfcb9d745-wwgjm" event={"ID":"d102863a-f829-47cd-9de5-5f9fdb59dab2","Type":"ContainerDied","Data":"dcebe7e47b0da4eee1002ce5c284c1c401212be41b5d38bcf6926353a5314b0a"} Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.124337 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-wwgjm"
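The W-level manager.go:1169 warnings above come from cAdvisor racing CRI-O: the cgroup watch fires for a freshly created crio-<id> scope before the container is registered, the lookup returns a 404, and the same container IDs (ac6cb4eb…, dfdde920…, e7faa00f…) show up moments later in PLEG ContainerStarted events, so the warnings are transient noise rather than failures. A quick cross-check that every such warning is eventually followed by a start event, under the same one-entry-per-line and file-name assumptions as the sketch above:

```python
import re

# Sketch: pair each cAdvisor "can't find the container" warning with a later
# PLEG ContainerStarted event for the same 64-hex container ID. Assumes the
# journal was exported one entry per line; "kubelet.log" is a placeholder.
WARN = re.compile(r"can't find the container with id ([0-9a-f]{64})")
START = re.compile(r'"ContainerStarted","Data":"([0-9a-f]{64})"')

pending: set[str] = set()
with open("kubelet.log", encoding="utf-8") as fh:
    for line in fh:
        if (m := WARN.search(line)):
            pending.add(m.group(1))
        elif (m := START.search(line)):
            pending.discard(m.group(1))

print(f"warnings never followed by ContainerStarted: {len(pending)}")
```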
Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.129787 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f9wxn" event={"ID":"f523d852-2e73-4168-b3ca-af18fa28cc07","Type":"ContainerStarted","Data":"7fb519e84f26876238729134c00763e1fe3488e3657accc660e01f25a7662e66"} Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.132244 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"38fd97a6-e936-4503-a238-97b63e01a7de","Type":"ContainerStarted","Data":"dfdde920204eae35ac2e7b9cdb76c94155c0b6ebbc6c25aec62ddcbd9f298819"} Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.136402 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-6vz8k" Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.139459 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758b79db4c-6vz8k" event={"ID":"adb42c24-7e1e-49c0-8ebe-650683d96a1c","Type":"ContainerDied","Data":"200ceda972de5efea1e46627027f7e82bc211f35d7b5f98e692360bd27813770"} Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.143703 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xnw9m" event={"ID":"4749b7e4-3896-474d-84b3-8ddf351a24ac","Type":"ContainerStarted","Data":"e7faa00fac253da8b23fd9286508450d6474c4f49878ad51e850818dedbbc485"} Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.145283 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a416f725-cd7c-4bd8-9123-28cad18157d9","Type":"ContainerStarted","Data":"1bf3f683eb743a951ba5b04ff9c520ea57c867de4adaffd0fd976dbfd55f3bdb"} Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.146102 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ae3d89be-0a42-4a3d-914c-3bff67bd37b4","Type":"ContainerStarted","Data":"a27a0b5894255df8201306c7b2fb3a57d2f9bbbc3d9b43efbe6e93aa161c70f1"} Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.151633 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a29eea83-9d60-4101-a351-6f8468a8116c","Type":"ContainerStarted","Data":"75ab889577e68466624a72eb529ae5fb83b20894770ccc10fea01b43eba853cc"} Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.153385 5065 generic.go:334] "Generic (PLEG): container finished" podID="a86bcb56-84dd-44cc-9e43-07e603bdbb6b" containerID="acb8d75dd4e3f894577f52a2eba001b33b1d86f784c0a98bb3d6ed644b3e5ae4" exitCode=0 Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.153439 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-g9gk5" event={"ID":"a86bcb56-84dd-44cc-9e43-07e603bdbb6b","Type":"ContainerDied","Data":"acb8d75dd4e3f894577f52a2eba001b33b1d86f784c0a98bb3d6ed644b3e5ae4"} Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.155116 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"050c0e99-7984-43be-8701-84602f0c9294","Type":"ContainerStarted","Data":"e47b95346f5fb41aa8812383f270346e803298adb9d1ded1f9e88d6c7a33b368"} Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.163860 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e","Type":"ContainerStarted","Data":"50818428c86541f6d1d579a2c1817e0ca794a71a770d9778453461428ab79f29"} Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.169606 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.233152 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-wwgjm"] Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.245855 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-wwgjm"] Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.263265 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-6vz8k"] Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.268347 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-6vz8k"] Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.885517 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb42c24-7e1e-49c0-8ebe-650683d96a1c" path="/var/lib/kubelet/pods/adb42c24-7e1e-49c0-8ebe-650683d96a1c/volumes" Oct 08 13:35:16 crc kubenswrapper[5065]: I1008 13:35:16.887511 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d102863a-f829-47cd-9de5-5f9fdb59dab2" path="/var/lib/kubelet/pods/d102863a-f829-47cd-9de5-5f9fdb59dab2/volumes" Oct 08 13:35:17 crc kubenswrapper[5065]: I1008 13:35:17.172338 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b215a42c-d422-4db9-a83e-df79f7bff9e6","Type":"ContainerStarted","Data":"f33c34dece7eb0326dae40dda26fa89c7b1bf0188c542c1e7483b8e186d2b770"} Oct 08 13:35:17 crc kubenswrapper[5065]: I1008 13:35:17.175550 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-g9gk5" event={"ID":"a86bcb56-84dd-44cc-9e43-07e603bdbb6b","Type":"ContainerStarted","Data":"94e6ae82434c6593177ffc0e838477c8ab17d622b58a293db3f705d708d0cc02"} Oct 08 13:35:17 crc kubenswrapper[5065]: I1008 13:35:17.175625 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77597f887-g9gk5" Oct 08 13:35:17 crc kubenswrapper[5065]: I1008 13:35:17.177382 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-644597f84c-78hgj" event={"ID":"6e9e9334-4e87-400b-b5e7-9ca7d7293233","Type":"ContainerStarted","Data":"ad50b50c7447c2444c5c9ad3b1429d67a747042f1aae2f11cfb9bc5a112b4d46"} Oct 08 13:35:17 crc kubenswrapper[5065]: I1008 13:35:17.177651 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-644597f84c-78hgj" Oct 08 13:35:17 crc kubenswrapper[5065]: I1008 13:35:17.197003 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77597f887-g9gk5" podStartSLOduration=14.73106645 podStartE2EDuration="15.19692377s" podCreationTimestamp="2025-10-08 13:35:02 +0000 UTC" firstStartedPulling="2025-10-08 13:35:14.662432099 +0000 UTC m=+1016.439813856" lastFinishedPulling="2025-10-08 13:35:15.128289419 +0000 UTC m=+1016.905671176" observedRunningTime="2025-10-08 13:35:17.192784416 +0000 UTC m=+1018.970166173" watchObservedRunningTime="2025-10-08 13:35:17.19692377 +0000 UTC m=+1018.974305527" Oct 08 13:35:17 crc kubenswrapper[5065]: I1008 13:35:17.210780 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-644597f84c-78hgj" 
podStartSLOduration=6.290642401 podStartE2EDuration="16.210745164s" podCreationTimestamp="2025-10-08 13:35:01 +0000 UTC" firstStartedPulling="2025-10-08 13:35:04.872472376 +0000 UTC m=+1006.649854133" lastFinishedPulling="2025-10-08 13:35:14.792575139 +0000 UTC m=+1016.569956896" observedRunningTime="2025-10-08 13:35:17.209392916 +0000 UTC m=+1018.986774673" watchObservedRunningTime="2025-10-08 13:35:17.210745164 +0000 UTC m=+1018.988126921" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:19.999833 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-l67ps"] Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.001126 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.003989 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.007089 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-l67ps"] Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.129024 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brjk8\" (UniqueName: \"kubernetes.io/projected/4eba221c-653d-434a-a486-16be41c4a5c4-kube-api-access-brjk8\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.129114 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eba221c-653d-434a-a486-16be41c4a5c4-config\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.129167 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4eba221c-653d-434a-a486-16be41c4a5c4-ovn-rundir\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.129200 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eba221c-653d-434a-a486-16be41c4a5c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.129308 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eba221c-653d-434a-a486-16be41c4a5c4-combined-ca-bundle\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.129329 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4eba221c-653d-434a-a486-16be41c4a5c4-ovs-rundir\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " 
pod="openstack/ovn-controller-metrics-l67ps"
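The two pod_startup_latency_tracker.go entries above fix the meaning of the reported durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it. For dnsmasq-dns-77597f887-g9gk5 that is 15.19692377s - (13:35:15.128289419 - 13:35:14.662432099) = 14.73106645s, exactly the logged value. The same arithmetic in code, with the logged nanoseconds truncated to microseconds (Python's datetime resolution):

```python
from datetime import datetime, timezone

# Reproduce the tracked durations for dnsmasq-dns-77597f887-g9gk5 from the
# entry above. podStartE2EDuration = watchObservedRunningTime -
# podCreationTimestamp; podStartSLOduration subtracts the image-pull window.
def ts(s: str) -> datetime:
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created = datetime(2025, 10, 8, 13, 35, 2, tzinfo=timezone.utc)
observed = ts("2025-10-08 13:35:17.196923")
pull_start = ts("2025-10-08 13:35:14.662432")
pull_end = ts("2025-10-08 13:35:15.128289")

e2e = observed - created
slo = e2e - (pull_end - pull_start)
print(e2e.total_seconds())  # 15.196923 (logged: podStartE2EDuration=15.19692377s)
print(slo.total_seconds())  # 14.731066 (logged: podStartSLOduration=14.73106645)
```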
pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.137958 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-644597f84c-78hgj"] Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.138483 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-644597f84c-78hgj" podUID="6e9e9334-4e87-400b-b5e7-9ca7d7293233" containerName="dnsmasq-dns" containerID="cri-o://ad50b50c7447c2444c5c9ad3b1429d67a747042f1aae2f11cfb9bc5a112b4d46" gracePeriod=10 Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.185962 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-9dkwf"] Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.187193 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.189205 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.228285 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-9dkwf"] Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.233359 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eba221c-653d-434a-a486-16be41c4a5c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.235881 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eba221c-653d-434a-a486-16be41c4a5c4-combined-ca-bundle\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.236004 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4eba221c-653d-434a-a486-16be41c4a5c4-ovs-rundir\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.236128 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-ovsdbserver-sb\") pod \"dnsmasq-dns-545fb8c44f-9dkwf\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.236222 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-dns-svc\") pod \"dnsmasq-dns-545fb8c44f-9dkwf\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.236434 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bzlz\" (UniqueName: \"kubernetes.io/projected/94e16ff3-4737-4183-9939-57ad3d01b4e6-kube-api-access-4bzlz\") pod \"dnsmasq-dns-545fb8c44f-9dkwf\" (UID: 
\"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.236546 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-config\") pod \"dnsmasq-dns-545fb8c44f-9dkwf\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.236670 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brjk8\" (UniqueName: \"kubernetes.io/projected/4eba221c-653d-434a-a486-16be41c4a5c4-kube-api-access-brjk8\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.236755 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eba221c-653d-434a-a486-16be41c4a5c4-config\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.236873 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4eba221c-653d-434a-a486-16be41c4a5c4-ovn-rundir\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.237149 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4eba221c-653d-434a-a486-16be41c4a5c4-ovn-rundir\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.237468 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4eba221c-653d-434a-a486-16be41c4a5c4-ovs-rundir\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.238091 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eba221c-653d-434a-a486-16be41c4a5c4-config\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.240568 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eba221c-653d-434a-a486-16be41c4a5c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.245692 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eba221c-653d-434a-a486-16be41c4a5c4-combined-ca-bundle\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc 
kubenswrapper[5065]: I1008 13:35:20.265971 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brjk8\" (UniqueName: \"kubernetes.io/projected/4eba221c-653d-434a-a486-16be41c4a5c4-kube-api-access-brjk8\") pod \"ovn-controller-metrics-l67ps\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") " pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.283461 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77597f887-g9gk5"] Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.283673 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77597f887-g9gk5" podUID="a86bcb56-84dd-44cc-9e43-07e603bdbb6b" containerName="dnsmasq-dns" containerID="cri-o://94e6ae82434c6593177ffc0e838477c8ab17d622b58a293db3f705d708d0cc02" gracePeriod=10 Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.311648 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-8sfnw"] Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.313022 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.320602 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-8sfnw"] Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.322720 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.322792 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.337961 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-ovsdbserver-sb\") pod \"dnsmasq-dns-545fb8c44f-9dkwf\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.338006 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-ovsdbserver-nb\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.338076 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-dns-svc\") pod \"dnsmasq-dns-545fb8c44f-9dkwf\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.338111 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-config\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.338131 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-dns-svc\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.338152 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87dws\" (UniqueName: \"kubernetes.io/projected/03fcc65a-a29f-4453-b176-00b55369a0ba-kube-api-access-87dws\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.338181 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bzlz\" (UniqueName: \"kubernetes.io/projected/94e16ff3-4737-4183-9939-57ad3d01b4e6-kube-api-access-4bzlz\") pod \"dnsmasq-dns-545fb8c44f-9dkwf\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.338203 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-config\") pod \"dnsmasq-dns-545fb8c44f-9dkwf\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.338248 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-ovsdbserver-sb\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.339033 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-ovsdbserver-sb\") pod \"dnsmasq-dns-545fb8c44f-9dkwf\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.339932 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-dns-svc\") pod \"dnsmasq-dns-545fb8c44f-9dkwf\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.339987 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-config\") pod \"dnsmasq-dns-545fb8c44f-9dkwf\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.358455 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bzlz\" (UniqueName: \"kubernetes.io/projected/94e16ff3-4737-4183-9939-57ad3d01b4e6-kube-api-access-4bzlz\") pod \"dnsmasq-dns-545fb8c44f-9dkwf\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.439999 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-ovsdbserver-nb\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.440059 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-config\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.440078 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-dns-svc\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.440097 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87dws\" (UniqueName: \"kubernetes.io/projected/03fcc65a-a29f-4453-b176-00b55369a0ba-kube-api-access-87dws\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.440148 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-ovsdbserver-sb\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.441030 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-ovsdbserver-sb\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.441571 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-ovsdbserver-nb\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.444299 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-dns-svc\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.445932 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-config\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.457463 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87dws\" (UniqueName: \"kubernetes.io/projected/03fcc65a-a29f-4453-b176-00b55369a0ba-kube-api-access-87dws\") pod \"dnsmasq-dns-dc9d58d7-8sfnw\" (UID: 
\"03fcc65a-a29f-4453-b176-00b55369a0ba\") " pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.609238 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:20 crc kubenswrapper[5065]: I1008 13:35:20.711457 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:21 crc kubenswrapper[5065]: I1008 13:35:21.238885 5065 generic.go:334] "Generic (PLEG): container finished" podID="a86bcb56-84dd-44cc-9e43-07e603bdbb6b" containerID="94e6ae82434c6593177ffc0e838477c8ab17d622b58a293db3f705d708d0cc02" exitCode=0 Oct 08 13:35:21 crc kubenswrapper[5065]: I1008 13:35:21.238983 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-g9gk5" event={"ID":"a86bcb56-84dd-44cc-9e43-07e603bdbb6b","Type":"ContainerDied","Data":"94e6ae82434c6593177ffc0e838477c8ab17d622b58a293db3f705d708d0cc02"} Oct 08 13:35:21 crc kubenswrapper[5065]: I1008 13:35:21.241941 5065 generic.go:334] "Generic (PLEG): container finished" podID="6e9e9334-4e87-400b-b5e7-9ca7d7293233" containerID="ad50b50c7447c2444c5c9ad3b1429d67a747042f1aae2f11cfb9bc5a112b4d46" exitCode=0 Oct 08 13:35:21 crc kubenswrapper[5065]: I1008 13:35:21.241969 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-644597f84c-78hgj" event={"ID":"6e9e9334-4e87-400b-b5e7-9ca7d7293233","Type":"ContainerDied","Data":"ad50b50c7447c2444c5c9ad3b1429d67a747042f1aae2f11cfb9bc5a112b4d46"} Oct 08 13:35:22 crc kubenswrapper[5065]: I1008 13:35:22.167582 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-644597f84c-78hgj" podUID="6e9e9334-4e87-400b-b5e7-9ca7d7293233" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.102:5353: connect: connection refused" Oct 08 13:35:22 crc kubenswrapper[5065]: I1008 13:35:22.472300 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77597f887-g9gk5" podUID="a86bcb56-84dd-44cc-9e43-07e603bdbb6b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.103:5353: connect: connection refused" Oct 08 13:35:23 crc kubenswrapper[5065]: I1008 13:35:23.870281 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-644597f84c-78hgj" Oct 08 13:35:23 crc kubenswrapper[5065]: I1008 13:35:23.876374 5065 util.go:48] "No ready sandbox for pod can be found. 
Oct 08 13:35:23 crc kubenswrapper[5065]: I1008 13:35:23.935688 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9e9334-4e87-400b-b5e7-9ca7d7293233-config\") pod \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\" (UID: \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\") " Oct 08 13:35:23 crc kubenswrapper[5065]: I1008 13:35:23.935958 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkwk7\" (UniqueName: \"kubernetes.io/projected/6e9e9334-4e87-400b-b5e7-9ca7d7293233-kube-api-access-qkwk7\") pod \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\" (UID: \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\") " Oct 08 13:35:23 crc kubenswrapper[5065]: I1008 13:35:23.936057 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e9e9334-4e87-400b-b5e7-9ca7d7293233-dns-svc\") pod \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\" (UID: \"6e9e9334-4e87-400b-b5e7-9ca7d7293233\") " Oct 08 13:35:23 crc kubenswrapper[5065]: I1008 13:35:23.941515 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9e9334-4e87-400b-b5e7-9ca7d7293233-kube-api-access-qkwk7" (OuterVolumeSpecName: "kube-api-access-qkwk7") pod "6e9e9334-4e87-400b-b5e7-9ca7d7293233" (UID: "6e9e9334-4e87-400b-b5e7-9ca7d7293233"). InnerVolumeSpecName "kube-api-access-qkwk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:35:23 crc kubenswrapper[5065]: I1008 13:35:23.975513 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9e9334-4e87-400b-b5e7-9ca7d7293233-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6e9e9334-4e87-400b-b5e7-9ca7d7293233" (UID: "6e9e9334-4e87-400b-b5e7-9ca7d7293233"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:35:23 crc kubenswrapper[5065]: I1008 13:35:23.977842 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9e9334-4e87-400b-b5e7-9ca7d7293233-config" (OuterVolumeSpecName: "config") pod "6e9e9334-4e87-400b-b5e7-9ca7d7293233" (UID: "6e9e9334-4e87-400b-b5e7-9ca7d7293233"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.037678 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-config\") pod \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\" (UID: \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\") " Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.037738 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k2fk\" (UniqueName: \"kubernetes.io/projected/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-kube-api-access-4k2fk\") pod \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\" (UID: \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\") " Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.037774 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-dns-svc\") pod \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\" (UID: \"a86bcb56-84dd-44cc-9e43-07e603bdbb6b\") " Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.038065 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9e9334-4e87-400b-b5e7-9ca7d7293233-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.038081 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkwk7\" (UniqueName: \"kubernetes.io/projected/6e9e9334-4e87-400b-b5e7-9ca7d7293233-kube-api-access-qkwk7\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.038092 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e9e9334-4e87-400b-b5e7-9ca7d7293233-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.042348 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-kube-api-access-4k2fk" (OuterVolumeSpecName: "kube-api-access-4k2fk") pod "a86bcb56-84dd-44cc-9e43-07e603bdbb6b" (UID: "a86bcb56-84dd-44cc-9e43-07e603bdbb6b"). InnerVolumeSpecName "kube-api-access-4k2fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.093061 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-config" (OuterVolumeSpecName: "config") pod "a86bcb56-84dd-44cc-9e43-07e603bdbb6b" (UID: "a86bcb56-84dd-44cc-9e43-07e603bdbb6b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.095200 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a86bcb56-84dd-44cc-9e43-07e603bdbb6b" (UID: "a86bcb56-84dd-44cc-9e43-07e603bdbb6b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.139627 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.139660 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k2fk\" (UniqueName: \"kubernetes.io/projected/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-kube-api-access-4k2fk\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.139672 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a86bcb56-84dd-44cc-9e43-07e603bdbb6b-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.272133 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-g9gk5" event={"ID":"a86bcb56-84dd-44cc-9e43-07e603bdbb6b","Type":"ContainerDied","Data":"a0445d689d6418c9516c050ca75dc142ea85a0ceb09ee763ca2ba12890f9de4c"} Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.272181 5065 scope.go:117] "RemoveContainer" containerID="94e6ae82434c6593177ffc0e838477c8ab17d622b58a293db3f705d708d0cc02" Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.272296 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-g9gk5" Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.275755 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-644597f84c-78hgj" event={"ID":"6e9e9334-4e87-400b-b5e7-9ca7d7293233","Type":"ContainerDied","Data":"de18137f956859fa9e8b14b67c4234f817d663adb45c50b6add7ade4691397ec"} Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.275915 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-644597f84c-78hgj"
Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.307611 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77597f887-g9gk5"]
Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.315794 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77597f887-g9gk5"]
Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.322248 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-644597f84c-78hgj"]
Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.328900 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-644597f84c-78hgj"]
Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.413978 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-l67ps"]
Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.421239 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-8sfnw"]
Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.514395 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-9dkwf"]
Oct 08 13:35:24 crc kubenswrapper[5065]: W1008 13:35:24.525054 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03fcc65a_a29f_4453_b176_00b55369a0ba.slice/crio-57b8db48ac88b228e4b002fa339ab91d7710518b04e966c3af2e7f46661594df WatchSource:0}: Error finding container 57b8db48ac88b228e4b002fa339ab91d7710518b04e966c3af2e7f46661594df: Status 404 returned error can't find the container with id 57b8db48ac88b228e4b002fa339ab91d7710518b04e966c3af2e7f46661594df
Oct 08 13:35:24 crc kubenswrapper[5065]: W1008 13:35:24.530979 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94e16ff3_4737_4183_9939_57ad3d01b4e6.slice/crio-5fa33eea4788bf195447e6217d827e706fa447bee0ed221728809eb8f4ace262 WatchSource:0}: Error finding container 5fa33eea4788bf195447e6217d827e706fa447bee0ed221728809eb8f4ace262: Status 404 returned error can't find the container with id 5fa33eea4788bf195447e6217d827e706fa447bee0ed221728809eb8f4ace262
Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.564669 5065 scope.go:117] "RemoveContainer" containerID="acb8d75dd4e3f894577f52a2eba001b33b1d86f784c0a98bb3d6ed644b3e5ae4"
Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.658494 5065 scope.go:117] "RemoveContainer" containerID="ad50b50c7447c2444c5c9ad3b1429d67a747042f1aae2f11cfb9bc5a112b4d46"
Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.698108 5065 scope.go:117] "RemoveContainer" containerID="3841ecd9605f7468ffd4bad07b1cf6f59958d2d68d9ce2251909c4a6a6c5fdd6"
Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.888770 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9e9334-4e87-400b-b5e7-9ca7d7293233" path="/var/lib/kubelet/pods/6e9e9334-4e87-400b-b5e7-9ca7d7293233/volumes"
Oct 08 13:35:24 crc kubenswrapper[5065]: I1008 13:35:24.889530 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a86bcb56-84dd-44cc-9e43-07e603bdbb6b" path="/var/lib/kubelet/pods/a86bcb56-84dd-44cc-9e43-07e603bdbb6b/volumes"
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.286217 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e","Type":"ContainerStarted","Data":"960d5d7c46de5d8c549027a43b4c38ffc8152b31965cc6a2df89d95dbd1e8480"}
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.290922 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b215a42c-d422-4db9-a83e-df79f7bff9e6","Type":"ContainerStarted","Data":"1d09895c6bf96fc0edb25b62fd359eefd1417b0069aceeca025d88f6c6f9d233"}
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.292919 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"38fd97a6-e936-4503-a238-97b63e01a7de","Type":"ContainerStarted","Data":"2915271db74b88ab9e16677d8efcfed4d3eb12143a6af327cb8a7a3688f4f413"}
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.295067 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a29eea83-9d60-4101-a351-6f8468a8116c","Type":"ContainerStarted","Data":"3dd840b2a1968cb45aa4333789027815be04db1faa5fe300d7fbe1813965b970"}
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.295216 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.299061 5065 generic.go:334] "Generic (PLEG): container finished" podID="94e16ff3-4737-4183-9939-57ad3d01b4e6" containerID="b6f46b128d1e0b9de254e07c5206bd59417640f171092abbbd44f5999271e5c4" exitCode=0
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.299124 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" event={"ID":"94e16ff3-4737-4183-9939-57ad3d01b4e6","Type":"ContainerDied","Data":"b6f46b128d1e0b9de254e07c5206bd59417640f171092abbbd44f5999271e5c4"}
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.299147 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" event={"ID":"94e16ff3-4737-4183-9939-57ad3d01b4e6","Type":"ContainerStarted","Data":"5fa33eea4788bf195447e6217d827e706fa447bee0ed221728809eb8f4ace262"}
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.301852 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"050c0e99-7984-43be-8701-84602f0c9294","Type":"ContainerStarted","Data":"d1bfe7a89169420e3afd6113c1442671a263c0dfbedac845f19da59f06dfd847"}
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.303528 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-l67ps" event={"ID":"4eba221c-653d-434a-a486-16be41c4a5c4","Type":"ContainerStarted","Data":"f267824b75a67ed203718eacd97fef3f37c4807c69adbf8065b78ad02e426938"}
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.305087 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de6b79be-fc23-4b08-bc30-192946f827af","Type":"ContainerStarted","Data":"b51dcdd3f01b58385b881697bb2b6d5395b8e36ee0ccbcc68fb485c141d9f1a9"}
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.305639 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.307712 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f9wxn" event={"ID":"f523d852-2e73-4168-b3ca-af18fa28cc07","Type":"ContainerStarted","Data":"504cb4d2f0d2b0818331cd6d07891089513ae3e6588d657954b8285ef3cba2aa"}
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.322821 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xnw9m" event={"ID":"4749b7e4-3896-474d-84b3-8ddf351a24ac","Type":"ContainerStarted","Data":"a44b760b0eeef2da2d46263b1c69d9a3f20ef2196ecd4ad96ea02f39ea7d5e50"}
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.323039 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xnw9m"
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.325068 5065 generic.go:334] "Generic (PLEG): container finished" podID="03fcc65a-a29f-4453-b176-00b55369a0ba" containerID="2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6" exitCode=0
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.325103 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" event={"ID":"03fcc65a-a29f-4453-b176-00b55369a0ba","Type":"ContainerDied","Data":"2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6"}
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.325121 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" event={"ID":"03fcc65a-a29f-4453-b176-00b55369a0ba","Type":"ContainerStarted","Data":"57b8db48ac88b228e4b002fa339ab91d7710518b04e966c3af2e7f46661594df"}
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.337774 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.369937854 podStartE2EDuration="18.337755057s" podCreationTimestamp="2025-10-08 13:35:07 +0000 UTC" firstStartedPulling="2025-10-08 13:35:15.637149661 +0000 UTC m=+1017.414531418" lastFinishedPulling="2025-10-08 13:35:24.604966864 +0000 UTC m=+1026.382348621" observedRunningTime="2025-10-08 13:35:25.328681485 +0000 UTC m=+1027.106063262" watchObservedRunningTime="2025-10-08 13:35:25.337755057 +0000 UTC m=+1027.115136814"
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.412960 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=11.628749677 podStartE2EDuration="19.412940252s" podCreationTimestamp="2025-10-08 13:35:06 +0000 UTC" firstStartedPulling="2025-10-08 13:35:15.544786 +0000 UTC m=+1017.322167757" lastFinishedPulling="2025-10-08 13:35:23.328976575 +0000 UTC m=+1025.106358332" observedRunningTime="2025-10-08 13:35:25.403980464 +0000 UTC m=+1027.181362241" watchObservedRunningTime="2025-10-08 13:35:25.412940252 +0000 UTC m=+1027.190322009"
Oct 08 13:35:25 crc kubenswrapper[5065]: I1008 13:35:25.451008 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xnw9m" podStartSLOduration=4.264046342 podStartE2EDuration="12.450990287s" podCreationTimestamp="2025-10-08 13:35:13 +0000 UTC" firstStartedPulling="2025-10-08 13:35:15.822709648 +0000 UTC m=+1017.600091405" lastFinishedPulling="2025-10-08 13:35:24.009653593 +0000 UTC m=+1025.787035350" observedRunningTime="2025-10-08 13:35:25.450087112 +0000 UTC m=+1027.227468879" watchObservedRunningTime="2025-10-08 13:35:25.450990287 +0000 UTC m=+1027.228372044"
Oct 08 13:35:26 crc kubenswrapper[5065]: I1008 13:35:26.334215 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" event={"ID":"03fcc65a-a29f-4453-b176-00b55369a0ba","Type":"ContainerStarted","Data":"626781fc3a08476bf5908d3beed7acc53f9c1c46143ef6a9329d7f6954eeec30"}
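
The three "Observed pod startup duration" entries above relate their fields by simple arithmetic: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal sketch that checks this against the kube-state-metrics-0 entry (values copied from the log; the variable names are illustrative, not a kubelet API):

    # Python: verify podStartSLOduration for openstack/kube-state-metrics-0
    pod_start_e2e = 18.337755057                  # podStartE2EDuration, seconds
    pull_window = 24.604966864 - 15.637149661     # lastFinishedPulling - firstStartedPulling,
                                                  # seconds past 13:35 (same minute in the log)
    print(round(pod_start_e2e - pull_window, 9))  # 9.369937854 == podStartSLOduration

The same identity holds for the memcached-0 and ovn-controller-xnw9m entries above.
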
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:26 crc kubenswrapper[5065]: I1008 13:35:26.336957 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ae3d89be-0a42-4a3d-914c-3bff67bd37b4","Type":"ContainerStarted","Data":"264b1ee903df6ce1a97e07b64d812c86782e3a58f2f09c609b3c81d9d02ee22a"} Oct 08 13:35:26 crc kubenswrapper[5065]: I1008 13:35:26.339484 5065 generic.go:334] "Generic (PLEG): container finished" podID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerID="504cb4d2f0d2b0818331cd6d07891089513ae3e6588d657954b8285ef3cba2aa" exitCode=0 Oct 08 13:35:26 crc kubenswrapper[5065]: I1008 13:35:26.339551 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f9wxn" event={"ID":"f523d852-2e73-4168-b3ca-af18fa28cc07","Type":"ContainerDied","Data":"504cb4d2f0d2b0818331cd6d07891089513ae3e6588d657954b8285ef3cba2aa"} Oct 08 13:35:26 crc kubenswrapper[5065]: I1008 13:35:26.342130 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" event={"ID":"94e16ff3-4737-4183-9939-57ad3d01b4e6","Type":"ContainerStarted","Data":"d5a01442258a732b5ae5a33a4f838123c42623b8e574df6dcee750a2e799f539"} Oct 08 13:35:26 crc kubenswrapper[5065]: I1008 13:35:26.342570 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:26 crc kubenswrapper[5065]: I1008 13:35:26.344138 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a416f725-cd7c-4bd8-9123-28cad18157d9","Type":"ContainerStarted","Data":"8873473b6c6d45cdc9c68e469a3d5b5e234302c288241e9875c11f2360575009"} Oct 08 13:35:26 crc kubenswrapper[5065]: I1008 13:35:26.381403 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" podStartSLOduration=6.38138777 podStartE2EDuration="6.38138777s" podCreationTimestamp="2025-10-08 13:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:35:26.353768524 +0000 UTC m=+1028.131150281" watchObservedRunningTime="2025-10-08 13:35:26.38138777 +0000 UTC m=+1028.158769527" Oct 08 13:35:26 crc kubenswrapper[5065]: I1008 13:35:26.398523 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" podStartSLOduration=6.398508575 podStartE2EDuration="6.398508575s" podCreationTimestamp="2025-10-08 13:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:35:26.397268371 +0000 UTC m=+1028.174650118" watchObservedRunningTime="2025-10-08 13:35:26.398508575 +0000 UTC m=+1028.175890332" Oct 08 13:35:28 crc kubenswrapper[5065]: I1008 13:35:28.360576 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-l67ps" event={"ID":"4eba221c-653d-434a-a486-16be41c4a5c4","Type":"ContainerStarted","Data":"18cb61ad73086df94f6a3e8295ee1d22474fda18c12c345421b646c270cb0232"} Oct 08 13:35:28 crc kubenswrapper[5065]: I1008 13:35:28.363543 5065 generic.go:334] "Generic (PLEG): container finished" podID="03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" containerID="960d5d7c46de5d8c549027a43b4c38ffc8152b31965cc6a2df89d95dbd1e8480" exitCode=0 Oct 08 13:35:28 crc kubenswrapper[5065]: I1008 13:35:28.363634 5065 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e","Type":"ContainerDied","Data":"960d5d7c46de5d8c549027a43b4c38ffc8152b31965cc6a2df89d95dbd1e8480"} Oct 08 13:35:28 crc kubenswrapper[5065]: I1008 13:35:28.366396 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f9wxn" event={"ID":"f523d852-2e73-4168-b3ca-af18fa28cc07","Type":"ContainerStarted","Data":"5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee"} Oct 08 13:35:28 crc kubenswrapper[5065]: I1008 13:35:28.366459 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f9wxn" event={"ID":"f523d852-2e73-4168-b3ca-af18fa28cc07","Type":"ContainerStarted","Data":"a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf"} Oct 08 13:35:28 crc kubenswrapper[5065]: I1008 13:35:28.367133 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:28 crc kubenswrapper[5065]: I1008 13:35:28.367164 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:35:28 crc kubenswrapper[5065]: I1008 13:35:28.369107 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b215a42c-d422-4db9-a83e-df79f7bff9e6","Type":"ContainerStarted","Data":"87f2173752e5478c6a5d9a8376b7108d4661ca6ba8b7eb3ed95cd8acf5ce88a2"} Oct 08 13:35:28 crc kubenswrapper[5065]: I1008 13:35:28.372353 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"38fd97a6-e936-4503-a238-97b63e01a7de","Type":"ContainerStarted","Data":"fbe9f8ee86cd56018708f58dee327d04bc11e4fe4d541910462237c91b46cc3e"} Oct 08 13:35:28 crc kubenswrapper[5065]: I1008 13:35:28.380962 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-l67ps" podStartSLOduration=6.247185125 podStartE2EDuration="9.380942906s" podCreationTimestamp="2025-10-08 13:35:19 +0000 UTC" firstStartedPulling="2025-10-08 13:35:24.519278757 +0000 UTC m=+1026.296660514" lastFinishedPulling="2025-10-08 13:35:27.653036538 +0000 UTC m=+1029.430418295" observedRunningTime="2025-10-08 13:35:28.37710043 +0000 UTC m=+1030.154482207" watchObservedRunningTime="2025-10-08 13:35:28.380942906 +0000 UTC m=+1030.158324663" Oct 08 13:35:28 crc kubenswrapper[5065]: I1008 13:35:28.479064 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.042278492 podStartE2EDuration="14.479036477s" podCreationTimestamp="2025-10-08 13:35:14 +0000 UTC" firstStartedPulling="2025-10-08 13:35:16.200557947 +0000 UTC m=+1017.977939704" lastFinishedPulling="2025-10-08 13:35:27.637315932 +0000 UTC m=+1029.414697689" observedRunningTime="2025-10-08 13:35:28.402645608 +0000 UTC m=+1030.180027365" watchObservedRunningTime="2025-10-08 13:35:28.479036477 +0000 UTC m=+1030.256418234" Oct 08 13:35:28 crc kubenswrapper[5065]: I1008 13:35:28.507186 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-f9wxn" podStartSLOduration=7.5619119040000005 podStartE2EDuration="15.507161427s" podCreationTimestamp="2025-10-08 13:35:13 +0000 UTC" firstStartedPulling="2025-10-08 13:35:15.913733272 +0000 UTC m=+1017.691115029" lastFinishedPulling="2025-10-08 13:35:23.858982785 +0000 UTC m=+1025.636364552" observedRunningTime="2025-10-08 13:35:28.473158814 
+0000 UTC m=+1030.250540581" watchObservedRunningTime="2025-10-08 13:35:28.507161427 +0000 UTC m=+1030.284543184" Oct 08 13:35:28 crc kubenswrapper[5065]: I1008 13:35:28.511113 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.666200911 podStartE2EDuration="16.511101126s" podCreationTimestamp="2025-10-08 13:35:12 +0000 UTC" firstStartedPulling="2025-10-08 13:35:15.761636754 +0000 UTC m=+1017.539018521" lastFinishedPulling="2025-10-08 13:35:27.606536979 +0000 UTC m=+1029.383918736" observedRunningTime="2025-10-08 13:35:28.496496131 +0000 UTC m=+1030.273877898" watchObservedRunningTime="2025-10-08 13:35:28.511101126 +0000 UTC m=+1030.288482883" Oct 08 13:35:29 crc kubenswrapper[5065]: I1008 13:35:29.151917 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:29 crc kubenswrapper[5065]: I1008 13:35:29.152233 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:29 crc kubenswrapper[5065]: I1008 13:35:29.221214 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:29 crc kubenswrapper[5065]: I1008 13:35:29.391215 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e","Type":"ContainerStarted","Data":"9c1b2cb3bba97162839d7321cc87203aba920e07e4f99928885d1e4d4f6be3cf"} Oct 08 13:35:29 crc kubenswrapper[5065]: I1008 13:35:29.395662 5065 generic.go:334] "Generic (PLEG): container finished" podID="050c0e99-7984-43be-8701-84602f0c9294" containerID="d1bfe7a89169420e3afd6113c1442671a263c0dfbedac845f19da59f06dfd847" exitCode=0 Oct 08 13:35:29 crc kubenswrapper[5065]: I1008 13:35:29.395759 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"050c0e99-7984-43be-8701-84602f0c9294","Type":"ContainerDied","Data":"d1bfe7a89169420e3afd6113c1442671a263c0dfbedac845f19da59f06dfd847"} Oct 08 13:35:29 crc kubenswrapper[5065]: I1008 13:35:29.444774 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=17.085352001 podStartE2EDuration="25.444749039s" podCreationTimestamp="2025-10-08 13:35:04 +0000 UTC" firstStartedPulling="2025-10-08 13:35:15.649207846 +0000 UTC m=+1017.426589603" lastFinishedPulling="2025-10-08 13:35:24.008604884 +0000 UTC m=+1025.785986641" observedRunningTime="2025-10-08 13:35:29.417107392 +0000 UTC m=+1031.194489159" watchObservedRunningTime="2025-10-08 13:35:29.444749039 +0000 UTC m=+1031.222130806" Oct 08 13:35:29 crc kubenswrapper[5065]: I1008 13:35:29.454918 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 08 13:35:30 crc kubenswrapper[5065]: I1008 13:35:30.408226 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"050c0e99-7984-43be-8701-84602f0c9294","Type":"ContainerStarted","Data":"4879ade2ad03c5af7ff4d4d4202d6af725543287d1ec07f3078406f5bb64df6e"} Oct 08 13:35:30 crc kubenswrapper[5065]: I1008 13:35:30.439790 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=18.386565926 podStartE2EDuration="26.439774743s" podCreationTimestamp="2025-10-08 13:35:04 +0000 UTC" firstStartedPulling="2025-10-08 13:35:15.64865482 +0000 
UTC m=+1017.426036577" lastFinishedPulling="2025-10-08 13:35:23.701863647 +0000 UTC m=+1025.479245394" observedRunningTime="2025-10-08 13:35:30.438721794 +0000 UTC m=+1032.216103551" watchObservedRunningTime="2025-10-08 13:35:30.439774743 +0000 UTC m=+1032.217156500" Oct 08 13:35:30 crc kubenswrapper[5065]: I1008 13:35:30.603226 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:30 crc kubenswrapper[5065]: I1008 13:35:30.603484 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:30 crc kubenswrapper[5065]: I1008 13:35:30.611644 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:30 crc kubenswrapper[5065]: I1008 13:35:30.686197 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:30 crc kubenswrapper[5065]: I1008 13:35:30.714694 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" Oct 08 13:35:30 crc kubenswrapper[5065]: I1008 13:35:30.777322 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-9dkwf"] Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.416913 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" podUID="94e16ff3-4737-4183-9939-57ad3d01b4e6" containerName="dnsmasq-dns" containerID="cri-o://d5a01442258a732b5ae5a33a4f838123c42623b8e574df6dcee750a2e799f539" gracePeriod=10 Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.454784 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.460819 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.796679 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 08 13:35:31 crc kubenswrapper[5065]: E1008 13:35:31.797648 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a86bcb56-84dd-44cc-9e43-07e603bdbb6b" containerName="dnsmasq-dns" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.797668 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a86bcb56-84dd-44cc-9e43-07e603bdbb6b" containerName="dnsmasq-dns" Oct 08 13:35:31 crc kubenswrapper[5065]: E1008 13:35:31.797679 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a86bcb56-84dd-44cc-9e43-07e603bdbb6b" containerName="init" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.797685 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a86bcb56-84dd-44cc-9e43-07e603bdbb6b" containerName="init" Oct 08 13:35:31 crc kubenswrapper[5065]: E1008 13:35:31.797715 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9e9334-4e87-400b-b5e7-9ca7d7293233" containerName="dnsmasq-dns" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.797723 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9e9334-4e87-400b-b5e7-9ca7d7293233" containerName="dnsmasq-dns" Oct 08 13:35:31 crc kubenswrapper[5065]: E1008 13:35:31.797734 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9e9334-4e87-400b-b5e7-9ca7d7293233" containerName="init" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.797739 5065 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9e9334-4e87-400b-b5e7-9ca7d7293233" containerName="init" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.797900 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a86bcb56-84dd-44cc-9e43-07e603bdbb6b" containerName="dnsmasq-dns" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.797922 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9e9334-4e87-400b-b5e7-9ca7d7293233" containerName="dnsmasq-dns" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.799908 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.805325 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.806803 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-xnm48" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.807002 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.807052 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.813560 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.873763 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.873833 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8xtv\" (UniqueName: \"kubernetes.io/projected/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-kube-api-access-n8xtv\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.873897 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.873959 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.873998 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-config\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.874025 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-scripts\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.874052 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.885960 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.975068 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-config\") pod \"94e16ff3-4737-4183-9939-57ad3d01b4e6\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.975176 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bzlz\" (UniqueName: \"kubernetes.io/projected/94e16ff3-4737-4183-9939-57ad3d01b4e6-kube-api-access-4bzlz\") pod \"94e16ff3-4737-4183-9939-57ad3d01b4e6\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.975266 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-ovsdbserver-sb\") pod \"94e16ff3-4737-4183-9939-57ad3d01b4e6\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.975308 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-dns-svc\") pod \"94e16ff3-4737-4183-9939-57ad3d01b4e6\" (UID: \"94e16ff3-4737-4183-9939-57ad3d01b4e6\") " Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.975620 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.975697 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.975757 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8xtv\" (UniqueName: \"kubernetes.io/projected/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-kube-api-access-n8xtv\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.975801 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.975879 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.975922 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-config\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.975956 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-scripts\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.977196 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.977391 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-config\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.977373 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-scripts\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.981185 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e16ff3-4737-4183-9939-57ad3d01b4e6-kube-api-access-4bzlz" (OuterVolumeSpecName: "kube-api-access-4bzlz") pod "94e16ff3-4737-4183-9939-57ad3d01b4e6" (UID: "94e16ff3-4737-4183-9939-57ad3d01b4e6"). InnerVolumeSpecName "kube-api-access-4bzlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.981428 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.981478 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.990683 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:31 crc kubenswrapper[5065]: I1008 13:35:31.998130 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8xtv\" (UniqueName: \"kubernetes.io/projected/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-kube-api-access-n8xtv\") pod \"ovn-northd-0\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " pod="openstack/ovn-northd-0" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.019718 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-config" (OuterVolumeSpecName: "config") pod "94e16ff3-4737-4183-9939-57ad3d01b4e6" (UID: "94e16ff3-4737-4183-9939-57ad3d01b4e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.020250 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "94e16ff3-4737-4183-9939-57ad3d01b4e6" (UID: "94e16ff3-4737-4183-9939-57ad3d01b4e6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.025141 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "94e16ff3-4737-4183-9939-57ad3d01b4e6" (UID: "94e16ff3-4737-4183-9939-57ad3d01b4e6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.077478 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bzlz\" (UniqueName: \"kubernetes.io/projected/94e16ff3-4737-4183-9939-57ad3d01b4e6-kube-api-access-4bzlz\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.077518 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.077530 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.077540 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e16ff3-4737-4183-9939-57ad3d01b4e6-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.120427 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.425522 5065 generic.go:334] "Generic (PLEG): container finished" podID="94e16ff3-4737-4183-9939-57ad3d01b4e6" containerID="d5a01442258a732b5ae5a33a4f838123c42623b8e574df6dcee750a2e799f539" exitCode=0 Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.425588 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.425608 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" event={"ID":"94e16ff3-4737-4183-9939-57ad3d01b4e6","Type":"ContainerDied","Data":"d5a01442258a732b5ae5a33a4f838123c42623b8e574df6dcee750a2e799f539"} Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.426020 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-9dkwf" event={"ID":"94e16ff3-4737-4183-9939-57ad3d01b4e6","Type":"ContainerDied","Data":"5fa33eea4788bf195447e6217d827e706fa447bee0ed221728809eb8f4ace262"} Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.426066 5065 scope.go:117] "RemoveContainer" containerID="d5a01442258a732b5ae5a33a4f838123c42623b8e574df6dcee750a2e799f539" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.451915 5065 scope.go:117] "RemoveContainer" containerID="b6f46b128d1e0b9de254e07c5206bd59417640f171092abbbd44f5999271e5c4" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.466059 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-9dkwf"] Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.472031 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-9dkwf"] Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.489469 5065 scope.go:117] "RemoveContainer" containerID="d5a01442258a732b5ae5a33a4f838123c42623b8e574df6dcee750a2e799f539" Oct 08 13:35:32 crc kubenswrapper[5065]: E1008 13:35:32.489956 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5a01442258a732b5ae5a33a4f838123c42623b8e574df6dcee750a2e799f539\": container with ID starting with 
d5a01442258a732b5ae5a33a4f838123c42623b8e574df6dcee750a2e799f539 not found: ID does not exist" containerID="d5a01442258a732b5ae5a33a4f838123c42623b8e574df6dcee750a2e799f539" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.490003 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5a01442258a732b5ae5a33a4f838123c42623b8e574df6dcee750a2e799f539"} err="failed to get container status \"d5a01442258a732b5ae5a33a4f838123c42623b8e574df6dcee750a2e799f539\": rpc error: code = NotFound desc = could not find container \"d5a01442258a732b5ae5a33a4f838123c42623b8e574df6dcee750a2e799f539\": container with ID starting with d5a01442258a732b5ae5a33a4f838123c42623b8e574df6dcee750a2e799f539 not found: ID does not exist" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.490035 5065 scope.go:117] "RemoveContainer" containerID="b6f46b128d1e0b9de254e07c5206bd59417640f171092abbbd44f5999271e5c4" Oct 08 13:35:32 crc kubenswrapper[5065]: E1008 13:35:32.490380 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6f46b128d1e0b9de254e07c5206bd59417640f171092abbbd44f5999271e5c4\": container with ID starting with b6f46b128d1e0b9de254e07c5206bd59417640f171092abbbd44f5999271e5c4 not found: ID does not exist" containerID="b6f46b128d1e0b9de254e07c5206bd59417640f171092abbbd44f5999271e5c4" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.490409 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6f46b128d1e0b9de254e07c5206bd59417640f171092abbbd44f5999271e5c4"} err="failed to get container status \"b6f46b128d1e0b9de254e07c5206bd59417640f171092abbbd44f5999271e5c4\": rpc error: code = NotFound desc = could not find container \"b6f46b128d1e0b9de254e07c5206bd59417640f171092abbbd44f5999271e5c4\": container with ID starting with b6f46b128d1e0b9de254e07c5206bd59417640f171092abbbd44f5999271e5c4 not found: ID does not exist" Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.610619 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 08 13:35:32 crc kubenswrapper[5065]: W1008 13:35:32.622234 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c2f3965_f057_4b1d_bbc9_7235ac48ed49.slice/crio-5b32d2e0dacbb713ee7c85badd4d4089098997891f4ca797b74905c8a9b33eed WatchSource:0}: Error finding container 5b32d2e0dacbb713ee7c85badd4d4089098997891f4ca797b74905c8a9b33eed: Status 404 returned error can't find the container with id 5b32d2e0dacbb713ee7c85badd4d4089098997891f4ca797b74905c8a9b33eed Oct 08 13:35:32 crc kubenswrapper[5065]: I1008 13:35:32.881939 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94e16ff3-4737-4183-9939-57ad3d01b4e6" path="/var/lib/kubelet/pods/94e16ff3-4737-4183-9939-57ad3d01b4e6/volumes" Oct 08 13:35:33 crc kubenswrapper[5065]: I1008 13:35:33.435677 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c2f3965-f057-4b1d-bbc9-7235ac48ed49","Type":"ContainerStarted","Data":"5b32d2e0dacbb713ee7c85badd4d4089098997891f4ca797b74905c8a9b33eed"} Oct 08 13:35:34 crc kubenswrapper[5065]: I1008 13:35:34.446171 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c2f3965-f057-4b1d-bbc9-7235ac48ed49","Type":"ContainerStarted","Data":"2c302eb0bc6fc03213ec7ffaa2e249422a78eeddefdda92efac176790ead6fa9"} Oct 08 
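
The "RemoveContainer" / NotFound pairs above are the usual benign race on delete: by the time the kubelet asks CRI-O for the container's status, the container has already been removed, so the deletor logs the NotFound result and moves on. A minimal sketch of that pattern (the runtime client here is hypothetical, not the kubelet's actual CRI interface):

    # Python: idempotent delete -- treat NotFound as "already removed"
    class NotFoundError(Exception):
        """Stands in for a gRPC NotFound status from the runtime."""

    def remove_container(runtime, container_id: str) -> None:
        try:
            runtime.remove(container_id)
        except NotFoundError:
            # Same outcome the log above settles on: nothing left to delete.
            pass
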
Oct 08 13:35:34 crc kubenswrapper[5065]: I1008 13:35:34.446966 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Oct 08 13:35:34 crc kubenswrapper[5065]: I1008 13:35:34.447035 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c2f3965-f057-4b1d-bbc9-7235ac48ed49","Type":"ContainerStarted","Data":"cd7a89294fe370f8f8e4fa9239f9e8afee5cb7f783b16a353fb49a9e06106fbe"}
Oct 08 13:35:34 crc kubenswrapper[5065]: I1008 13:35:34.467389 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.351342342 podStartE2EDuration="3.467367183s" podCreationTimestamp="2025-10-08 13:35:31 +0000 UTC" firstStartedPulling="2025-10-08 13:35:32.627916149 +0000 UTC m=+1034.405297906" lastFinishedPulling="2025-10-08 13:35:33.74394099 +0000 UTC m=+1035.521322747" observedRunningTime="2025-10-08 13:35:34.461713237 +0000 UTC m=+1036.239095034" watchObservedRunningTime="2025-10-08 13:35:34.467367183 +0000 UTC m=+1036.244748940"
Oct 08 13:35:35 crc kubenswrapper[5065]: I1008 13:35:35.913688 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Oct 08 13:35:35 crc kubenswrapper[5065]: I1008 13:35:35.914008 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Oct 08 13:35:35 crc kubenswrapper[5065]: I1008 13:35:35.952930 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Oct 08 13:35:36 crc kubenswrapper[5065]: I1008 13:35:36.007171 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:36 crc kubenswrapper[5065]: I1008 13:35:36.007361 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:36 crc kubenswrapper[5065]: I1008 13:35:36.079763 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:36 crc kubenswrapper[5065]: I1008 13:35:36.513121 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Oct 08 13:35:36 crc kubenswrapper[5065]: I1008 13:35:36.520887 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.357039 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.503697 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-tfb6k"]
Oct 08 13:35:38 crc kubenswrapper[5065]: E1008 13:35:38.503997 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e16ff3-4737-4183-9939-57ad3d01b4e6" containerName="dnsmasq-dns"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.504010 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e16ff3-4737-4183-9939-57ad3d01b4e6" containerName="dnsmasq-dns"
Oct 08 13:35:38 crc kubenswrapper[5065]: E1008 13:35:38.504030 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e16ff3-4737-4183-9939-57ad3d01b4e6" containerName="init"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.512550 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e16ff3-4737-4183-9939-57ad3d01b4e6" containerName="init"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.512949 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="94e16ff3-4737-4183-9939-57ad3d01b4e6" containerName="dnsmasq-dns"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.513977 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.532239 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-tfb6k"]
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.580108 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-ovsdbserver-nb\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.580184 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbprb\" (UniqueName: \"kubernetes.io/projected/a2151f29-c70c-44f5-a08d-39f9238778d5-kube-api-access-xbprb\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.580235 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-ovsdbserver-sb\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.580258 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-dns-svc\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.580302 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-config\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.681762 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-ovsdbserver-nb\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.681814 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbprb\" (UniqueName: \"kubernetes.io/projected/a2151f29-c70c-44f5-a08d-39f9238778d5-kube-api-access-xbprb\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.681862 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-ovsdbserver-sb\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.681886 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-dns-svc\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.681922 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-config\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.682661 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-ovsdbserver-nb\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.682868 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-ovsdbserver-sb\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.683369 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-dns-svc\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.683490 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-config\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.718794 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbprb\" (UniqueName: \"kubernetes.io/projected/a2151f29-c70c-44f5-a08d-39f9238778d5-kube-api-access-xbprb\") pod \"dnsmasq-dns-7b587f8db7-tfb6k\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:38 crc kubenswrapper[5065]: I1008 13:35:38.862506 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.328271 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-tfb6k"]
Oct 08 13:35:39 crc kubenswrapper[5065]: W1008 13:35:39.330755 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2151f29_c70c_44f5_a08d_39f9238778d5.slice/crio-c32c3afdaf2ca258d795b3f8730f4b232cd9ef38dbcffffa78398f2df3222c34 WatchSource:0}: Error finding container c32c3afdaf2ca258d795b3f8730f4b232cd9ef38dbcffffa78398f2df3222c34: Status 404 returned error can't find the container with id c32c3afdaf2ca258d795b3f8730f4b232cd9ef38dbcffffa78398f2df3222c34
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.495130 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k" event={"ID":"a2151f29-c70c-44f5-a08d-39f9238778d5","Type":"ContainerStarted","Data":"c32c3afdaf2ca258d795b3f8730f4b232cd9ef38dbcffffa78398f2df3222c34"}
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.615730 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.621479 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.623220 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-qvjhg"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.626731 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.626910 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.627016 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.665736 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.697979 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19063d41-be34-463b-8bb7-d45f7d804602-lock\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.698032 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19063d41-be34-463b-8bb7-d45f7d804602-cache\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.698086 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfhff\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-kube-api-access-vfhff\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.698114 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.698138 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.799520 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19063d41-be34-463b-8bb7-d45f7d804602-lock\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.799578 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19063d41-be34-463b-8bb7-d45f7d804602-cache\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.799634 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfhff\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-kube-api-access-vfhff\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.799663 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.799685 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: E1008 13:35:39.799880 5065 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 08 13:35:39 crc kubenswrapper[5065]: E1008 13:35:39.799894 5065 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 08 13:35:39 crc kubenswrapper[5065]: E1008 13:35:39.799940 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift podName:19063d41-be34-463b-8bb7-d45f7d804602 nodeName:}" failed. No retries permitted until 2025-10-08 13:35:40.299923398 +0000 UTC m=+1042.077305145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift") pod "swift-storage-0" (UID: "19063d41-be34-463b-8bb7-d45f7d804602") : configmap "swift-ring-files" not found
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.800018 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.800061 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19063d41-be34-463b-8bb7-d45f7d804602-cache\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.800899 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19063d41-be34-463b-8bb7-d45f7d804602-lock\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.820564 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfhff\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-kube-api-access-vfhff\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:39 crc kubenswrapper[5065]: I1008 13:35:39.826119 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.111963 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-6dp7h"]
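
The failed etc-swift mount above is retried with a doubling delay: durationBeforeRetry is 500ms on the first failure and 1s on the retry logged below, consistent with exponential backoff. A minimal sketch of that schedule (the initial value and factor are read off the log; the cap is an assumption, not shown in this excerpt):

    # Python: doubling retry delays as seen for the etc-swift volume
    def backoff_delays(initial=0.5, factor=2.0, cap=120.0):
        delay = initial
        while True:
            yield min(delay, cap)   # cap is assumed, not taken from the log
            delay *= factor

    gen = backoff_delays()
    print(next(gen), next(gen))     # 0.5 1.0 -> matches 500ms, then 1s
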
Need to start a new one" pod="openstack/swift-ring-rebalance-6dp7h" Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.114917 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.115058 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.116124 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.129715 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6dp7h"] Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.307451 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-combined-ca-bundle\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h" Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.307562 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-etc-swift\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h" Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.307596 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl8r7\" (UniqueName: \"kubernetes.io/projected/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-kube-api-access-rl8r7\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h" Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.307655 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-swiftconf\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h" Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.307719 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0" Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.307758 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-ring-data-devices\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h" Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.307820 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-scripts\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h" Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 
Oct 08 13:35:40 crc kubenswrapper[5065]: E1008 13:35:40.307899 5065 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 08 13:35:40 crc kubenswrapper[5065]: E1008 13:35:40.307926 5065 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 08 13:35:40 crc kubenswrapper[5065]: E1008 13:35:40.308004 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift podName:19063d41-be34-463b-8bb7-d45f7d804602 nodeName:}" failed. No retries permitted until 2025-10-08 13:35:41.307987854 +0000 UTC m=+1043.085369611 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift") pod "swift-storage-0" (UID: "19063d41-be34-463b-8bb7-d45f7d804602") : configmap "swift-ring-files" not found
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.409182 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-swiftconf\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.409306 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-ring-data-devices\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.409455 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-scripts\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.409541 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-dispersionconf\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.409586 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-combined-ca-bundle\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.409641 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-etc-swift\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.409679 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl8r7\" (UniqueName: \"kubernetes.io/projected/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-kube-api-access-rl8r7\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.410600 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-etc-swift\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.410995 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-ring-data-devices\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.412505 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-scripts\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.415192 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-dispersionconf\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.415274 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-combined-ca-bundle\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.427016 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-swiftconf\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.427840 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl8r7\" (UniqueName: \"kubernetes.io/projected/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-kube-api-access-rl8r7\") pod \"swift-ring-rebalance-6dp7h\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") " pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.432003 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:40 crc kubenswrapper[5065]: I1008 13:35:40.840881 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6dp7h"]
Oct 08 13:35:41 crc kubenswrapper[5065]: I1008 13:35:41.325278 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:41 crc kubenswrapper[5065]: E1008 13:35:41.325565 5065 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 08 13:35:41 crc kubenswrapper[5065]: E1008 13:35:41.325837 5065 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 08 13:35:41 crc kubenswrapper[5065]: E1008 13:35:41.325908 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift podName:19063d41-be34-463b-8bb7-d45f7d804602 nodeName:}" failed. No retries permitted until 2025-10-08 13:35:43.325885534 +0000 UTC m=+1045.103267301 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift") pod "swift-storage-0" (UID: "19063d41-be34-463b-8bb7-d45f7d804602") : configmap "swift-ring-files" not found
Oct 08 13:35:41 crc kubenswrapper[5065]: I1008 13:35:41.510463 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6dp7h" event={"ID":"79be2020-2dbb-4cb0-bba8-ee4e78a3786a","Type":"ContainerStarted","Data":"ab601993f75062047d690795a6769d123a3752092ddbab9d62d216bc1d93da11"}
Oct 08 13:35:41 crc kubenswrapper[5065]: I1008 13:35:41.712715 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-frtzj"]
Oct 08 13:35:41 crc kubenswrapper[5065]: I1008 13:35:41.714358 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-frtzj"
Oct 08 13:35:41 crc kubenswrapper[5065]: I1008 13:35:41.721625 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-frtzj"]
Oct 08 13:35:41 crc kubenswrapper[5065]: I1008 13:35:41.730169 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49wtm\" (UniqueName: \"kubernetes.io/projected/47c21010-ce00-4b6c-8e9d-2e407bb703ed-kube-api-access-49wtm\") pod \"glance-db-create-frtzj\" (UID: \"47c21010-ce00-4b6c-8e9d-2e407bb703ed\") " pod="openstack/glance-db-create-frtzj"
Oct 08 13:35:41 crc kubenswrapper[5065]: I1008 13:35:41.831545 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49wtm\" (UniqueName: \"kubernetes.io/projected/47c21010-ce00-4b6c-8e9d-2e407bb703ed-kube-api-access-49wtm\") pod \"glance-db-create-frtzj\" (UID: \"47c21010-ce00-4b6c-8e9d-2e407bb703ed\") " pod="openstack/glance-db-create-frtzj"
Oct 08 13:35:41 crc kubenswrapper[5065]: I1008 13:35:41.849972 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49wtm\" (UniqueName: \"kubernetes.io/projected/47c21010-ce00-4b6c-8e9d-2e407bb703ed-kube-api-access-49wtm\") pod \"glance-db-create-frtzj\" (UID: \"47c21010-ce00-4b6c-8e9d-2e407bb703ed\") " pod="openstack/glance-db-create-frtzj"
Oct 08 13:35:42 crc kubenswrapper[5065]: I1008 13:35:42.032677 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-frtzj"
Oct 08 13:35:42 crc kubenswrapper[5065]: I1008 13:35:42.458642 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-frtzj"]
Oct 08 13:35:42 crc kubenswrapper[5065]: I1008 13:35:42.519904 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-frtzj" event={"ID":"47c21010-ce00-4b6c-8e9d-2e407bb703ed","Type":"ContainerStarted","Data":"8e20ce1eade0ffee585ea7cf9443836906d802abf7afb9d45c9a64489553b537"}
Oct 08 13:35:43 crc kubenswrapper[5065]: I1008 13:35:43.365219 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:43 crc kubenswrapper[5065]: E1008 13:35:43.365393 5065 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 08 13:35:43 crc kubenswrapper[5065]: E1008 13:35:43.365602 5065 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 08 13:35:43 crc kubenswrapper[5065]: E1008 13:35:43.365659 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift podName:19063d41-be34-463b-8bb7-d45f7d804602 nodeName:}" failed. No retries permitted until 2025-10-08 13:35:47.36564059 +0000 UTC m=+1049.143022357 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift") pod "swift-storage-0" (UID: "19063d41-be34-463b-8bb7-d45f7d804602") : configmap "swift-ring-files" not found
Oct 08 13:35:43 crc kubenswrapper[5065]: I1008 13:35:43.531222 5065 generic.go:334] "Generic (PLEG): container finished" podID="47c21010-ce00-4b6c-8e9d-2e407bb703ed" containerID="d653154c9971e3cfa954bc36e38b40b4d923a999ac76c039bc335c1184665c31" exitCode=0
Oct 08 13:35:43 crc kubenswrapper[5065]: I1008 13:35:43.531326 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-frtzj" event={"ID":"47c21010-ce00-4b6c-8e9d-2e407bb703ed","Type":"ContainerDied","Data":"d653154c9971e3cfa954bc36e38b40b4d923a999ac76c039bc335c1184665c31"}
Oct 08 13:35:43 crc kubenswrapper[5065]: I1008 13:35:43.533712 5065 generic.go:334] "Generic (PLEG): container finished" podID="a2151f29-c70c-44f5-a08d-39f9238778d5" containerID="577d4ddff74eba3740d2624e787b781d0b31eed04d1ae22f625975693ab7293f" exitCode=0
Oct 08 13:35:43 crc kubenswrapper[5065]: I1008 13:35:43.533773 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k" event={"ID":"a2151f29-c70c-44f5-a08d-39f9238778d5","Type":"ContainerDied","Data":"577d4ddff74eba3740d2624e787b781d0b31eed04d1ae22f625975693ab7293f"}
Oct 08 13:35:45 crc kubenswrapper[5065]: I1008 13:35:45.554888 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-frtzj" event={"ID":"47c21010-ce00-4b6c-8e9d-2e407bb703ed","Type":"ContainerDied","Data":"8e20ce1eade0ffee585ea7cf9443836906d802abf7afb9d45c9a64489553b537"}
Oct 08 13:35:45 crc kubenswrapper[5065]: I1008 13:35:45.555518 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e20ce1eade0ffee585ea7cf9443836906d802abf7afb9d45c9a64489553b537"
Oct 08 13:35:45 crc kubenswrapper[5065]: I1008 13:35:45.584988 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-frtzj"
Oct 08 13:35:45 crc kubenswrapper[5065]: I1008 13:35:45.708317 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49wtm\" (UniqueName: \"kubernetes.io/projected/47c21010-ce00-4b6c-8e9d-2e407bb703ed-kube-api-access-49wtm\") pod \"47c21010-ce00-4b6c-8e9d-2e407bb703ed\" (UID: \"47c21010-ce00-4b6c-8e9d-2e407bb703ed\") "
Oct 08 13:35:45 crc kubenswrapper[5065]: I1008 13:35:45.714675 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c21010-ce00-4b6c-8e9d-2e407bb703ed-kube-api-access-49wtm" (OuterVolumeSpecName: "kube-api-access-49wtm") pod "47c21010-ce00-4b6c-8e9d-2e407bb703ed" (UID: "47c21010-ce00-4b6c-8e9d-2e407bb703ed"). InnerVolumeSpecName "kube-api-access-49wtm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:35:45 crc kubenswrapper[5065]: I1008 13:35:45.810955 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49wtm\" (UniqueName: \"kubernetes.io/projected/47c21010-ce00-4b6c-8e9d-2e407bb703ed-kube-api-access-49wtm\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.030832 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-dfd72"]
Oct 08 13:35:46 crc kubenswrapper[5065]: E1008 13:35:46.031475 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c21010-ce00-4b6c-8e9d-2e407bb703ed" containerName="mariadb-database-create"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.031567 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c21010-ce00-4b6c-8e9d-2e407bb703ed" containerName="mariadb-database-create"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.031827 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c21010-ce00-4b6c-8e9d-2e407bb703ed" containerName="mariadb-database-create"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.032372 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dfd72"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.086279 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dfd72"]
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.224333 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkfmn\" (UniqueName: \"kubernetes.io/projected/022c8ac2-ac4f-4994-949d-2f14030e1bda-kube-api-access-kkfmn\") pod \"keystone-db-create-dfd72\" (UID: \"022c8ac2-ac4f-4994-949d-2f14030e1bda\") " pod="openstack/keystone-db-create-dfd72"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.326337 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkfmn\" (UniqueName: \"kubernetes.io/projected/022c8ac2-ac4f-4994-949d-2f14030e1bda-kube-api-access-kkfmn\") pod \"keystone-db-create-dfd72\" (UID: \"022c8ac2-ac4f-4994-949d-2f14030e1bda\") " pod="openstack/keystone-db-create-dfd72"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.342102 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkfmn\" (UniqueName: \"kubernetes.io/projected/022c8ac2-ac4f-4994-949d-2f14030e1bda-kube-api-access-kkfmn\") pod \"keystone-db-create-dfd72\" (UID: \"022c8ac2-ac4f-4994-949d-2f14030e1bda\") " pod="openstack/keystone-db-create-dfd72"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.348887 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dfd72"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.356581 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-7t8kf"]
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.357846 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7t8kf"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.381628 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7t8kf"]
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.428298 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r7xj\" (UniqueName: \"kubernetes.io/projected/ef5f3c36-30db-4174-90a1-ac7dd45f2207-kube-api-access-8r7xj\") pod \"placement-db-create-7t8kf\" (UID: \"ef5f3c36-30db-4174-90a1-ac7dd45f2207\") " pod="openstack/placement-db-create-7t8kf"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.529902 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r7xj\" (UniqueName: \"kubernetes.io/projected/ef5f3c36-30db-4174-90a1-ac7dd45f2207-kube-api-access-8r7xj\") pod \"placement-db-create-7t8kf\" (UID: \"ef5f3c36-30db-4174-90a1-ac7dd45f2207\") " pod="openstack/placement-db-create-7t8kf"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.553544 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r7xj\" (UniqueName: \"kubernetes.io/projected/ef5f3c36-30db-4174-90a1-ac7dd45f2207-kube-api-access-8r7xj\") pod \"placement-db-create-7t8kf\" (UID: \"ef5f3c36-30db-4174-90a1-ac7dd45f2207\") " pod="openstack/placement-db-create-7t8kf"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.566310 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k" event={"ID":"a2151f29-c70c-44f5-a08d-39f9238778d5","Type":"ContainerStarted","Data":"864c4bcee9aca12d289071331bb7d75eccb4e05b423685c527460123db301ef1"}
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.567349 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.568900 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-frtzj"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.571771 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6dp7h" event={"ID":"79be2020-2dbb-4cb0-bba8-ee4e78a3786a","Type":"ContainerStarted","Data":"dea03e50a71ed799f2a49f0eb48790a19814692deaadf712ca9793f2249fcd3e"}
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.601540 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k" podStartSLOduration=8.60151927 podStartE2EDuration="8.60151927s" podCreationTimestamp="2025-10-08 13:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:35:46.588548695 +0000 UTC m=+1048.365930452" watchObservedRunningTime="2025-10-08 13:35:46.60151927 +0000 UTC m=+1048.378901027"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.615471 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-6dp7h" podStartSLOduration=2.041225779 podStartE2EDuration="6.615448541s" podCreationTimestamp="2025-10-08 13:35:40 +0000 UTC" firstStartedPulling="2025-10-08 13:35:40.85209829 +0000 UTC m=+1042.629480047" lastFinishedPulling="2025-10-08 13:35:45.426321052 +0000 UTC m=+1047.203702809" observedRunningTime="2025-10-08 13:35:46.609277417 +0000 UTC m=+1048.386659194" watchObservedRunningTime="2025-10-08 13:35:46.615448541 +0000 UTC m=+1048.392830298"
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.632475 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dfd72"]
Oct 08 13:35:46 crc kubenswrapper[5065]: I1008 13:35:46.749683 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7t8kf"
Oct 08 13:35:47 crc kubenswrapper[5065]: I1008 13:35:47.175109 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Oct 08 13:35:47 crc kubenswrapper[5065]: I1008 13:35:47.189955 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7t8kf"]
Oct 08 13:35:47 crc kubenswrapper[5065]: W1008 13:35:47.200179 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef5f3c36_30db_4174_90a1_ac7dd45f2207.slice/crio-3dbb6a9ac9937e5c739d04356c58dc5c2434a71002aa2d23261224ad5f18b7a9 WatchSource:0}: Error finding container 3dbb6a9ac9937e5c739d04356c58dc5c2434a71002aa2d23261224ad5f18b7a9: Status 404 returned error can't find the container with id 3dbb6a9ac9937e5c739d04356c58dc5c2434a71002aa2d23261224ad5f18b7a9
Oct 08 13:35:47 crc kubenswrapper[5065]: I1008 13:35:47.442962 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0"
Oct 08 13:35:47 crc kubenswrapper[5065]: E1008 13:35:47.443144 5065 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 08 13:35:47 crc kubenswrapper[5065]: E1008 13:35:47.443186 5065 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 08 13:35:47 crc kubenswrapper[5065]: E1008 13:35:47.443255 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift podName:19063d41-be34-463b-8bb7-d45f7d804602 nodeName:}" failed. No retries permitted until 2025-10-08 13:35:55.443230419 +0000 UTC m=+1057.220612176 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift") pod "swift-storage-0" (UID: "19063d41-be34-463b-8bb7-d45f7d804602") : configmap "swift-ring-files" not found
Oct 08 13:35:47 crc kubenswrapper[5065]: I1008 13:35:47.577830 5065 generic.go:334] "Generic (PLEG): container finished" podID="022c8ac2-ac4f-4994-949d-2f14030e1bda" containerID="912bf4ebfd4305ec91c794cfaf719918b4246027bd8bb8e15acf893c8876449d" exitCode=0
Oct 08 13:35:47 crc kubenswrapper[5065]: I1008 13:35:47.577906 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dfd72" event={"ID":"022c8ac2-ac4f-4994-949d-2f14030e1bda","Type":"ContainerDied","Data":"912bf4ebfd4305ec91c794cfaf719918b4246027bd8bb8e15acf893c8876449d"}
Oct 08 13:35:47 crc kubenswrapper[5065]: I1008 13:35:47.577938 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dfd72" event={"ID":"022c8ac2-ac4f-4994-949d-2f14030e1bda","Type":"ContainerStarted","Data":"8aa2ff291157053cd4709590659e796a06c5783428dca4add4d6c02fa8811ce3"}
Oct 08 13:35:47 crc kubenswrapper[5065]: I1008 13:35:47.580026 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7t8kf" event={"ID":"ef5f3c36-30db-4174-90a1-ac7dd45f2207","Type":"ContainerStarted","Data":"b282511f72530c5e6ac5ab13e197f2eae05c1a6b4a58b67bb604f364e59ff084"}
Oct 08 13:35:47 crc kubenswrapper[5065]: I1008 13:35:47.580061 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7t8kf" event={"ID":"ef5f3c36-30db-4174-90a1-ac7dd45f2207","Type":"ContainerStarted","Data":"3dbb6a9ac9937e5c739d04356c58dc5c2434a71002aa2d23261224ad5f18b7a9"}
Oct 08 13:35:47 crc kubenswrapper[5065]: I1008 13:35:47.614718 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-7t8kf" podStartSLOduration=1.6146910239999999 podStartE2EDuration="1.614691024s" podCreationTimestamp="2025-10-08 13:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:35:47.609343971 +0000 UTC m=+1049.386725728" watchObservedRunningTime="2025-10-08 13:35:47.614691024 +0000 UTC m=+1049.392072781"
Oct 08 13:35:48 crc kubenswrapper[5065]: I1008 13:35:48.588620 5065 generic.go:334] "Generic (PLEG): container finished" podID="ef5f3c36-30db-4174-90a1-ac7dd45f2207" containerID="b282511f72530c5e6ac5ab13e197f2eae05c1a6b4a58b67bb604f364e59ff084" exitCode=0
Oct 08 13:35:48 crc kubenswrapper[5065]: I1008 13:35:48.589003 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7t8kf" event={"ID":"ef5f3c36-30db-4174-90a1-ac7dd45f2207","Type":"ContainerDied","Data":"b282511f72530c5e6ac5ab13e197f2eae05c1a6b4a58b67bb604f364e59ff084"}
Oct 08 13:35:48 crc kubenswrapper[5065]: I1008 13:35:48.971291 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dfd72"
Oct 08 13:35:49 crc kubenswrapper[5065]: I1008 13:35:49.087843 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkfmn\" (UniqueName: \"kubernetes.io/projected/022c8ac2-ac4f-4994-949d-2f14030e1bda-kube-api-access-kkfmn\") pod \"022c8ac2-ac4f-4994-949d-2f14030e1bda\" (UID: \"022c8ac2-ac4f-4994-949d-2f14030e1bda\") "
Oct 08 13:35:49 crc kubenswrapper[5065]: I1008 13:35:49.093007 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022c8ac2-ac4f-4994-949d-2f14030e1bda-kube-api-access-kkfmn" (OuterVolumeSpecName: "kube-api-access-kkfmn") pod "022c8ac2-ac4f-4994-949d-2f14030e1bda" (UID: "022c8ac2-ac4f-4994-949d-2f14030e1bda"). InnerVolumeSpecName "kube-api-access-kkfmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:35:49 crc kubenswrapper[5065]: I1008 13:35:49.190175 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkfmn\" (UniqueName: \"kubernetes.io/projected/022c8ac2-ac4f-4994-949d-2f14030e1bda-kube-api-access-kkfmn\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:49 crc kubenswrapper[5065]: I1008 13:35:49.599710 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dfd72" event={"ID":"022c8ac2-ac4f-4994-949d-2f14030e1bda","Type":"ContainerDied","Data":"8aa2ff291157053cd4709590659e796a06c5783428dca4add4d6c02fa8811ce3"}
Oct 08 13:35:49 crc kubenswrapper[5065]: I1008 13:35:49.599772 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aa2ff291157053cd4709590659e796a06c5783428dca4add4d6c02fa8811ce3"
Oct 08 13:35:49 crc kubenswrapper[5065]: I1008 13:35:49.599738 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dfd72"
Oct 08 13:35:49 crc kubenswrapper[5065]: I1008 13:35:49.909647 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7t8kf"
Oct 08 13:35:50 crc kubenswrapper[5065]: I1008 13:35:50.005373 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r7xj\" (UniqueName: \"kubernetes.io/projected/ef5f3c36-30db-4174-90a1-ac7dd45f2207-kube-api-access-8r7xj\") pod \"ef5f3c36-30db-4174-90a1-ac7dd45f2207\" (UID: \"ef5f3c36-30db-4174-90a1-ac7dd45f2207\") "
Oct 08 13:35:50 crc kubenswrapper[5065]: I1008 13:35:50.011286 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef5f3c36-30db-4174-90a1-ac7dd45f2207-kube-api-access-8r7xj" (OuterVolumeSpecName: "kube-api-access-8r7xj") pod "ef5f3c36-30db-4174-90a1-ac7dd45f2207" (UID: "ef5f3c36-30db-4174-90a1-ac7dd45f2207"). InnerVolumeSpecName "kube-api-access-8r7xj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:35:50 crc kubenswrapper[5065]: I1008 13:35:50.106964 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r7xj\" (UniqueName: \"kubernetes.io/projected/ef5f3c36-30db-4174-90a1-ac7dd45f2207-kube-api-access-8r7xj\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:50 crc kubenswrapper[5065]: I1008 13:35:50.607990 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7t8kf" event={"ID":"ef5f3c36-30db-4174-90a1-ac7dd45f2207","Type":"ContainerDied","Data":"3dbb6a9ac9937e5c739d04356c58dc5c2434a71002aa2d23261224ad5f18b7a9"}
Oct 08 13:35:50 crc kubenswrapper[5065]: I1008 13:35:50.609041 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dbb6a9ac9937e5c739d04356c58dc5c2434a71002aa2d23261224ad5f18b7a9"
Oct 08 13:35:50 crc kubenswrapper[5065]: I1008 13:35:50.608049 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7t8kf"
Oct 08 13:35:51 crc kubenswrapper[5065]: I1008 13:35:51.682936 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8439-account-create-fc44x"]
Oct 08 13:35:51 crc kubenswrapper[5065]: E1008 13:35:51.684775 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef5f3c36-30db-4174-90a1-ac7dd45f2207" containerName="mariadb-database-create"
Oct 08 13:35:51 crc kubenswrapper[5065]: I1008 13:35:51.684953 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef5f3c36-30db-4174-90a1-ac7dd45f2207" containerName="mariadb-database-create"
Oct 08 13:35:51 crc kubenswrapper[5065]: E1008 13:35:51.685112 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022c8ac2-ac4f-4994-949d-2f14030e1bda" containerName="mariadb-database-create"
Oct 08 13:35:51 crc kubenswrapper[5065]: I1008 13:35:51.685234 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="022c8ac2-ac4f-4994-949d-2f14030e1bda" containerName="mariadb-database-create"
Oct 08 13:35:51 crc kubenswrapper[5065]: I1008 13:35:51.685636 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="022c8ac2-ac4f-4994-949d-2f14030e1bda" containerName="mariadb-database-create"
Oct 08 13:35:51 crc kubenswrapper[5065]: I1008 13:35:51.685774 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef5f3c36-30db-4174-90a1-ac7dd45f2207" containerName="mariadb-database-create"
Oct 08 13:35:51 crc kubenswrapper[5065]: I1008 13:35:51.686696 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8439-account-create-fc44x"
Oct 08 13:35:51 crc kubenswrapper[5065]: I1008 13:35:51.690104 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Oct 08 13:35:51 crc kubenswrapper[5065]: I1008 13:35:51.696287 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8439-account-create-fc44x"]
Oct 08 13:35:51 crc kubenswrapper[5065]: I1008 13:35:51.736509 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hxk6\" (UniqueName: \"kubernetes.io/projected/f3fcfe14-3fc3-4143-ba87-88695b643507-kube-api-access-5hxk6\") pod \"glance-8439-account-create-fc44x\" (UID: \"f3fcfe14-3fc3-4143-ba87-88695b643507\") " pod="openstack/glance-8439-account-create-fc44x"
Oct 08 13:35:51 crc kubenswrapper[5065]: I1008 13:35:51.838091 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hxk6\" (UniqueName: \"kubernetes.io/projected/f3fcfe14-3fc3-4143-ba87-88695b643507-kube-api-access-5hxk6\") pod \"glance-8439-account-create-fc44x\" (UID: \"f3fcfe14-3fc3-4143-ba87-88695b643507\") " pod="openstack/glance-8439-account-create-fc44x"
Oct 08 13:35:51 crc kubenswrapper[5065]: I1008 13:35:51.856333 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hxk6\" (UniqueName: \"kubernetes.io/projected/f3fcfe14-3fc3-4143-ba87-88695b643507-kube-api-access-5hxk6\") pod \"glance-8439-account-create-fc44x\" (UID: \"f3fcfe14-3fc3-4143-ba87-88695b643507\") " pod="openstack/glance-8439-account-create-fc44x"
Oct 08 13:35:52 crc kubenswrapper[5065]: I1008 13:35:52.003452 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8439-account-create-fc44x"
Oct 08 13:35:52 crc kubenswrapper[5065]: I1008 13:35:52.481161 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8439-account-create-fc44x"]
Oct 08 13:35:52 crc kubenswrapper[5065]: W1008 13:35:52.547194 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3fcfe14_3fc3_4143_ba87_88695b643507.slice/crio-09fc4cf361129c434f924e9e44a57c55f8e02e979fdc80bc4bdeec30255c165c WatchSource:0}: Error finding container 09fc4cf361129c434f924e9e44a57c55f8e02e979fdc80bc4bdeec30255c165c: Status 404 returned error can't find the container with id 09fc4cf361129c434f924e9e44a57c55f8e02e979fdc80bc4bdeec30255c165c
Oct 08 13:35:52 crc kubenswrapper[5065]: I1008 13:35:52.624329 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8439-account-create-fc44x" event={"ID":"f3fcfe14-3fc3-4143-ba87-88695b643507","Type":"ContainerStarted","Data":"09fc4cf361129c434f924e9e44a57c55f8e02e979fdc80bc4bdeec30255c165c"}
Oct 08 13:35:52 crc kubenswrapper[5065]: I1008 13:35:52.626890 5065 generic.go:334] "Generic (PLEG): container finished" podID="79be2020-2dbb-4cb0-bba8-ee4e78a3786a" containerID="dea03e50a71ed799f2a49f0eb48790a19814692deaadf712ca9793f2249fcd3e" exitCode=0
Oct 08 13:35:52 crc kubenswrapper[5065]: I1008 13:35:52.626949 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6dp7h" event={"ID":"79be2020-2dbb-4cb0-bba8-ee4e78a3786a","Type":"ContainerDied","Data":"dea03e50a71ed799f2a49f0eb48790a19814692deaadf712ca9793f2249fcd3e"}
Oct 08 13:35:53 crc kubenswrapper[5065]: I1008 13:35:53.637691 5065 generic.go:334] "Generic (PLEG): container finished" podID="f3fcfe14-3fc3-4143-ba87-88695b643507" containerID="2af10c1dd3063eb7dbf653e3d9f9df572dae8ef522034f34e2149cf77f826cbf" exitCode=0
Oct 08 13:35:53 crc kubenswrapper[5065]: I1008 13:35:53.637798 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8439-account-create-fc44x" event={"ID":"f3fcfe14-3fc3-4143-ba87-88695b643507","Type":"ContainerDied","Data":"2af10c1dd3063eb7dbf653e3d9f9df572dae8ef522034f34e2149cf77f826cbf"}
Oct 08 13:35:53 crc kubenswrapper[5065]: I1008 13:35:53.865558 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k"
Oct 08 13:35:53 crc kubenswrapper[5065]: I1008 13:35:53.931600 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-8sfnw"]
Oct 08 13:35:53 crc kubenswrapper[5065]: I1008 13:35:53.932131 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" podUID="03fcc65a-a29f-4453-b176-00b55369a0ba" containerName="dnsmasq-dns" containerID="cri-o://626781fc3a08476bf5908d3beed7acc53f9c1c46143ef6a9329d7f6954eeec30" gracePeriod=10
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.052978 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.184145 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-swiftconf\") pod \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") "
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.184259 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-dispersionconf\") pod \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") "
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.184344 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-scripts\") pod \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") "
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.184427 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-combined-ca-bundle\") pod \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") "
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.184464 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl8r7\" (UniqueName: \"kubernetes.io/projected/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-kube-api-access-rl8r7\") pod \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") "
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.184509 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-etc-swift\") pod \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") "
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.184537 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-ring-data-devices\") pod \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\" (UID: \"79be2020-2dbb-4cb0-bba8-ee4e78a3786a\") "
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.185502 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "79be2020-2dbb-4cb0-bba8-ee4e78a3786a" (UID: "79be2020-2dbb-4cb0-bba8-ee4e78a3786a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.186280 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "79be2020-2dbb-4cb0-bba8-ee4e78a3786a" (UID: "79be2020-2dbb-4cb0-bba8-ee4e78a3786a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.197261 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "79be2020-2dbb-4cb0-bba8-ee4e78a3786a" (UID: "79be2020-2dbb-4cb0-bba8-ee4e78a3786a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.207751 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-kube-api-access-rl8r7" (OuterVolumeSpecName: "kube-api-access-rl8r7") pod "79be2020-2dbb-4cb0-bba8-ee4e78a3786a" (UID: "79be2020-2dbb-4cb0-bba8-ee4e78a3786a"). InnerVolumeSpecName "kube-api-access-rl8r7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.210896 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "79be2020-2dbb-4cb0-bba8-ee4e78a3786a" (UID: "79be2020-2dbb-4cb0-bba8-ee4e78a3786a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.214391 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-scripts" (OuterVolumeSpecName: "scripts") pod "79be2020-2dbb-4cb0-bba8-ee4e78a3786a" (UID: "79be2020-2dbb-4cb0-bba8-ee4e78a3786a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.214604 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79be2020-2dbb-4cb0-bba8-ee4e78a3786a" (UID: "79be2020-2dbb-4cb0-bba8-ee4e78a3786a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.286684 5065 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-swiftconf\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.286715 5065 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-dispersionconf\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.286727 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.286736 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.286747 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl8r7\" (UniqueName: \"kubernetes.io/projected/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-kube-api-access-rl8r7\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.286756 5065 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-etc-swift\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.286764 5065 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/79be2020-2dbb-4cb0-bba8-ee4e78a3786a-ring-data-devices\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.321179 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw"
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.374929 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.374981 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.387514 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87dws\" (UniqueName: \"kubernetes.io/projected/03fcc65a-a29f-4453-b176-00b55369a0ba-kube-api-access-87dws\") pod \"03fcc65a-a29f-4453-b176-00b55369a0ba\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") "
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.387608 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-ovsdbserver-sb\") pod \"03fcc65a-a29f-4453-b176-00b55369a0ba\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") "
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.387697 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-ovsdbserver-nb\") pod \"03fcc65a-a29f-4453-b176-00b55369a0ba\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") "
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.387772 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-dns-svc\") pod \"03fcc65a-a29f-4453-b176-00b55369a0ba\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") "
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.387805 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-config\") pod \"03fcc65a-a29f-4453-b176-00b55369a0ba\" (UID: \"03fcc65a-a29f-4453-b176-00b55369a0ba\") "
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.390725 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03fcc65a-a29f-4453-b176-00b55369a0ba-kube-api-access-87dws" (OuterVolumeSpecName: "kube-api-access-87dws") pod "03fcc65a-a29f-4453-b176-00b55369a0ba" (UID: "03fcc65a-a29f-4453-b176-00b55369a0ba"). InnerVolumeSpecName "kube-api-access-87dws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.422822 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "03fcc65a-a29f-4453-b176-00b55369a0ba" (UID: "03fcc65a-a29f-4453-b176-00b55369a0ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.422838 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03fcc65a-a29f-4453-b176-00b55369a0ba" (UID: "03fcc65a-a29f-4453-b176-00b55369a0ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.423957 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "03fcc65a-a29f-4453-b176-00b55369a0ba" (UID: "03fcc65a-a29f-4453-b176-00b55369a0ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.427949 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-config" (OuterVolumeSpecName: "config") pod "03fcc65a-a29f-4453-b176-00b55369a0ba" (UID: "03fcc65a-a29f-4453-b176-00b55369a0ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.490067 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87dws\" (UniqueName: \"kubernetes.io/projected/03fcc65a-a29f-4453-b176-00b55369a0ba-kube-api-access-87dws\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.490109 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.490122 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.490137 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.490147 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03fcc65a-a29f-4453-b176-00b55369a0ba-config\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.650536 5065 generic.go:334] "Generic (PLEG): container finished" podID="03fcc65a-a29f-4453-b176-00b55369a0ba" containerID="626781fc3a08476bf5908d3beed7acc53f9c1c46143ef6a9329d7f6954eeec30" exitCode=0
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.650596 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw"
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.650606 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" event={"ID":"03fcc65a-a29f-4453-b176-00b55369a0ba","Type":"ContainerDied","Data":"626781fc3a08476bf5908d3beed7acc53f9c1c46143ef6a9329d7f6954eeec30"}
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.650650 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-8sfnw" event={"ID":"03fcc65a-a29f-4453-b176-00b55369a0ba","Type":"ContainerDied","Data":"57b8db48ac88b228e4b002fa339ab91d7710518b04e966c3af2e7f46661594df"}
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.650669 5065 scope.go:117] "RemoveContainer" containerID="626781fc3a08476bf5908d3beed7acc53f9c1c46143ef6a9329d7f6954eeec30"
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.654086 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6dp7h"
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.654081 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6dp7h" event={"ID":"79be2020-2dbb-4cb0-bba8-ee4e78a3786a","Type":"ContainerDied","Data":"ab601993f75062047d690795a6769d123a3752092ddbab9d62d216bc1d93da11"}
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.654129 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab601993f75062047d690795a6769d123a3752092ddbab9d62d216bc1d93da11"
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.694478 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-8sfnw"]
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.698591 5065 scope.go:117] "RemoveContainer" containerID="2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6"
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.703507 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-8sfnw"]
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.729607 5065 scope.go:117] "RemoveContainer" containerID="626781fc3a08476bf5908d3beed7acc53f9c1c46143ef6a9329d7f6954eeec30"
Oct 08 13:35:54 crc kubenswrapper[5065]: E1008 13:35:54.730280 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"626781fc3a08476bf5908d3beed7acc53f9c1c46143ef6a9329d7f6954eeec30\": container with ID starting with 626781fc3a08476bf5908d3beed7acc53f9c1c46143ef6a9329d7f6954eeec30 not found: ID does not exist" containerID="626781fc3a08476bf5908d3beed7acc53f9c1c46143ef6a9329d7f6954eeec30"
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.730308 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"626781fc3a08476bf5908d3beed7acc53f9c1c46143ef6a9329d7f6954eeec30"} err="failed to get container status \"626781fc3a08476bf5908d3beed7acc53f9c1c46143ef6a9329d7f6954eeec30\": rpc error: code = NotFound desc = could not find container \"626781fc3a08476bf5908d3beed7acc53f9c1c46143ef6a9329d7f6954eeec30\": container with ID starting with 626781fc3a08476bf5908d3beed7acc53f9c1c46143ef6a9329d7f6954eeec30 not found: ID does not exist"
Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.730334 5065 scope.go:117] "RemoveContainer" containerID="2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6"
Oct 08 13:35:54 crc kubenswrapper[5065]: E1008 13:35:54.730708 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6\": container with ID starting with 2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6 not found: ID does not exist" containerID="2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6"
13:35:54.730708 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6\": container with ID starting with 2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6 not found: ID does not exist" containerID="2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6" Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.730734 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6"} err="failed to get container status \"2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6\": rpc error: code = NotFound desc = could not find container \"2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6\": container with ID starting with 2e7761099fbe15c3394bd4aadffbe6d03083bb05722f6c63a09fc133fecf0bf6 not found: ID does not exist" Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.889304 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03fcc65a-a29f-4453-b176-00b55369a0ba" path="/var/lib/kubelet/pods/03fcc65a-a29f-4453-b176-00b55369a0ba/volumes" Oct 08 13:35:54 crc kubenswrapper[5065]: I1008 13:35:54.968305 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8439-account-create-fc44x" Oct 08 13:35:55 crc kubenswrapper[5065]: I1008 13:35:55.104743 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hxk6\" (UniqueName: \"kubernetes.io/projected/f3fcfe14-3fc3-4143-ba87-88695b643507-kube-api-access-5hxk6\") pod \"f3fcfe14-3fc3-4143-ba87-88695b643507\" (UID: \"f3fcfe14-3fc3-4143-ba87-88695b643507\") " Oct 08 13:35:55 crc kubenswrapper[5065]: I1008 13:35:55.110693 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3fcfe14-3fc3-4143-ba87-88695b643507-kube-api-access-5hxk6" (OuterVolumeSpecName: "kube-api-access-5hxk6") pod "f3fcfe14-3fc3-4143-ba87-88695b643507" (UID: "f3fcfe14-3fc3-4143-ba87-88695b643507"). InnerVolumeSpecName "kube-api-access-5hxk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:35:55 crc kubenswrapper[5065]: I1008 13:35:55.206938 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hxk6\" (UniqueName: \"kubernetes.io/projected/f3fcfe14-3fc3-4143-ba87-88695b643507-kube-api-access-5hxk6\") on node \"crc\" DevicePath \"\"" Oct 08 13:35:55 crc kubenswrapper[5065]: I1008 13:35:55.512358 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0" Oct 08 13:35:55 crc kubenswrapper[5065]: I1008 13:35:55.520865 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift\") pod \"swift-storage-0\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " pod="openstack/swift-storage-0" Oct 08 13:35:55 crc kubenswrapper[5065]: I1008 13:35:55.550265 5065 util.go:30] "No sandbox for pod can be found. 
Oct 08 13:35:55 crc kubenswrapper[5065]: I1008 13:35:55.664835 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8439-account-create-fc44x"
Oct 08 13:35:55 crc kubenswrapper[5065]: I1008 13:35:55.664846 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8439-account-create-fc44x" event={"ID":"f3fcfe14-3fc3-4143-ba87-88695b643507","Type":"ContainerDied","Data":"09fc4cf361129c434f924e9e44a57c55f8e02e979fdc80bc4bdeec30255c165c"}
Oct 08 13:35:55 crc kubenswrapper[5065]: I1008 13:35:55.665206 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09fc4cf361129c434f924e9e44a57c55f8e02e979fdc80bc4bdeec30255c165c"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.082627 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Oct 08 13:35:56 crc kubenswrapper[5065]: W1008 13:35:56.088603 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19063d41_be34_463b_8bb7_d45f7d804602.slice/crio-964753e35406738095c8766f59eb685720ecd5328d52e95ead61b58ba12eff82 WatchSource:0}: Error finding container 964753e35406738095c8766f59eb685720ecd5328d52e95ead61b58ba12eff82: Status 404 returned error can't find the container with id 964753e35406738095c8766f59eb685720ecd5328d52e95ead61b58ba12eff82
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.173578 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4f23-account-create-qdwq2"]
Oct 08 13:35:56 crc kubenswrapper[5065]: E1008 13:35:56.173903 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03fcc65a-a29f-4453-b176-00b55369a0ba" containerName="init"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.173924 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="03fcc65a-a29f-4453-b176-00b55369a0ba" containerName="init"
Oct 08 13:35:56 crc kubenswrapper[5065]: E1008 13:35:56.173933 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03fcc65a-a29f-4453-b176-00b55369a0ba" containerName="dnsmasq-dns"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.173939 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="03fcc65a-a29f-4453-b176-00b55369a0ba" containerName="dnsmasq-dns"
Oct 08 13:35:56 crc kubenswrapper[5065]: E1008 13:35:56.173968 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fcfe14-3fc3-4143-ba87-88695b643507" containerName="mariadb-account-create"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.173981 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fcfe14-3fc3-4143-ba87-88695b643507" containerName="mariadb-account-create"
Oct 08 13:35:56 crc kubenswrapper[5065]: E1008 13:35:56.174004 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79be2020-2dbb-4cb0-bba8-ee4e78a3786a" containerName="swift-ring-rebalance"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.174012 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="79be2020-2dbb-4cb0-bba8-ee4e78a3786a" containerName="swift-ring-rebalance"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.174193 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="03fcc65a-a29f-4453-b176-00b55369a0ba" containerName="dnsmasq-dns"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.174207 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="79be2020-2dbb-4cb0-bba8-ee4e78a3786a" containerName="swift-ring-rebalance"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.174241 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fcfe14-3fc3-4143-ba87-88695b643507" containerName="mariadb-account-create"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.174876 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4f23-account-create-qdwq2"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.177785 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.180804 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4f23-account-create-qdwq2"]
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.224698 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlzn7\" (UniqueName: \"kubernetes.io/projected/5a00958a-1aab-44b8-9e6b-13a09ca60d99-kube-api-access-wlzn7\") pod \"keystone-4f23-account-create-qdwq2\" (UID: \"5a00958a-1aab-44b8-9e6b-13a09ca60d99\") " pod="openstack/keystone-4f23-account-create-qdwq2"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.325981 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlzn7\" (UniqueName: \"kubernetes.io/projected/5a00958a-1aab-44b8-9e6b-13a09ca60d99-kube-api-access-wlzn7\") pod \"keystone-4f23-account-create-qdwq2\" (UID: \"5a00958a-1aab-44b8-9e6b-13a09ca60d99\") " pod="openstack/keystone-4f23-account-create-qdwq2"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.347820 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlzn7\" (UniqueName: \"kubernetes.io/projected/5a00958a-1aab-44b8-9e6b-13a09ca60d99-kube-api-access-wlzn7\") pod \"keystone-4f23-account-create-qdwq2\" (UID: \"5a00958a-1aab-44b8-9e6b-13a09ca60d99\") " pod="openstack/keystone-4f23-account-create-qdwq2"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.477713 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-fb78-account-create-zw8ns"]
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.478892 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fb78-account-create-zw8ns"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.482801 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.488939 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fb78-account-create-zw8ns"]
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.503543 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4f23-account-create-qdwq2"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.528773 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkgpw\" (UniqueName: \"kubernetes.io/projected/a1486a5d-5fb5-4055-b5e2-a9eceb919f29-kube-api-access-vkgpw\") pod \"placement-fb78-account-create-zw8ns\" (UID: \"a1486a5d-5fb5-4055-b5e2-a9eceb919f29\") " pod="openstack/placement-fb78-account-create-zw8ns"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.630494 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkgpw\" (UniqueName: \"kubernetes.io/projected/a1486a5d-5fb5-4055-b5e2-a9eceb919f29-kube-api-access-vkgpw\") pod \"placement-fb78-account-create-zw8ns\" (UID: \"a1486a5d-5fb5-4055-b5e2-a9eceb919f29\") " pod="openstack/placement-fb78-account-create-zw8ns"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.647059 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkgpw\" (UniqueName: \"kubernetes.io/projected/a1486a5d-5fb5-4055-b5e2-a9eceb919f29-kube-api-access-vkgpw\") pod \"placement-fb78-account-create-zw8ns\" (UID: \"a1486a5d-5fb5-4055-b5e2-a9eceb919f29\") " pod="openstack/placement-fb78-account-create-zw8ns"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.676743 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"964753e35406738095c8766f59eb685720ecd5328d52e95ead61b58ba12eff82"}
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.757356 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-dtvgx"]
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.758479 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.760176 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jcsf2"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.760240 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.768599 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dtvgx"]
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.790406 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4f23-account-create-qdwq2"]
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.803563 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fb78-account-create-zw8ns"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.836808 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-445nn\" (UniqueName: \"kubernetes.io/projected/77637c1f-26f5-4ea7-9a5c-af70030ca78c-kube-api-access-445nn\") pod \"glance-db-sync-dtvgx\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.836899 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-db-sync-config-data\") pod \"glance-db-sync-dtvgx\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.836938 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-combined-ca-bundle\") pod \"glance-db-sync-dtvgx\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.836990 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-config-data\") pod \"glance-db-sync-dtvgx\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.938641 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-445nn\" (UniqueName: \"kubernetes.io/projected/77637c1f-26f5-4ea7-9a5c-af70030ca78c-kube-api-access-445nn\") pod \"glance-db-sync-dtvgx\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.938944 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-db-sync-config-data\") pod \"glance-db-sync-dtvgx\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.938988 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-combined-ca-bundle\") pod \"glance-db-sync-dtvgx\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.939042 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-config-data\") pod \"glance-db-sync-dtvgx\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.944857 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-combined-ca-bundle\") pod \"glance-db-sync-dtvgx\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.945714 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-config-data\") pod \"glance-db-sync-dtvgx\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.946027 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-db-sync-config-data\") pod \"glance-db-sync-dtvgx\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:56 crc kubenswrapper[5065]: I1008 13:35:56.957686 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-445nn\" (UniqueName: \"kubernetes.io/projected/77637c1f-26f5-4ea7-9a5c-af70030ca78c-kube-api-access-445nn\") pod \"glance-db-sync-dtvgx\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:57 crc kubenswrapper[5065]: I1008 13:35:57.077493 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dtvgx"
Oct 08 13:35:57 crc kubenswrapper[5065]: I1008 13:35:57.223449 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fb78-account-create-zw8ns"]
Oct 08 13:35:57 crc kubenswrapper[5065]: I1008 13:35:57.603679 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dtvgx"]
Oct 08 13:35:57 crc kubenswrapper[5065]: W1008 13:35:57.611468 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77637c1f_26f5_4ea7_9a5c_af70030ca78c.slice/crio-520bf8c178996f3887b721bf2a333ab8057ea2aa8d3357d44fb8c78181cf8a7b WatchSource:0}: Error finding container 520bf8c178996f3887b721bf2a333ab8057ea2aa8d3357d44fb8c78181cf8a7b: Status 404 returned error can't find the container with id 520bf8c178996f3887b721bf2a333ab8057ea2aa8d3357d44fb8c78181cf8a7b
Oct 08 13:35:57 crc kubenswrapper[5065]: I1008 13:35:57.686258 5065 generic.go:334] "Generic (PLEG): container finished" podID="5a00958a-1aab-44b8-9e6b-13a09ca60d99" containerID="efa34a603bc3b0d53f772ed59ce1e71ad2f2f38f3afc75ef7c8ed98efcb40e44" exitCode=0
Oct 08 13:35:57 crc kubenswrapper[5065]: I1008 13:35:57.686309 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4f23-account-create-qdwq2" event={"ID":"5a00958a-1aab-44b8-9e6b-13a09ca60d99","Type":"ContainerDied","Data":"efa34a603bc3b0d53f772ed59ce1e71ad2f2f38f3afc75ef7c8ed98efcb40e44"}
Oct 08 13:35:57 crc kubenswrapper[5065]: I1008 13:35:57.686344 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4f23-account-create-qdwq2" event={"ID":"5a00958a-1aab-44b8-9e6b-13a09ca60d99","Type":"ContainerStarted","Data":"bc57d8ea5472e080d16b6d40deae1a4eb26baa452abb97246e4f9e54bd9beb23"}
Oct 08 13:35:57 crc kubenswrapper[5065]: I1008 13:35:57.687199 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb78-account-create-zw8ns" event={"ID":"a1486a5d-5fb5-4055-b5e2-a9eceb919f29","Type":"ContainerStarted","Data":"b21b71965a6806417169644dfc5f5265987ff43266210ef804384ad7bcc9a47e"}
Oct 08 13:35:57 crc kubenswrapper[5065]: I1008 13:35:57.689154 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dtvgx" event={"ID":"77637c1f-26f5-4ea7-9a5c-af70030ca78c","Type":"ContainerStarted","Data":"520bf8c178996f3887b721bf2a333ab8057ea2aa8d3357d44fb8c78181cf8a7b"}
Oct 08 13:35:57 crc kubenswrapper[5065]: I1008 13:35:57.692834 5065 generic.go:334] "Generic (PLEG): container finished" podID="ae3d89be-0a42-4a3d-914c-3bff67bd37b4" containerID="264b1ee903df6ce1a97e07b64d812c86782e3a58f2f09c609b3c81d9d02ee22a" exitCode=0
Oct 08 13:35:57 crc kubenswrapper[5065]: I1008 13:35:57.692880 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ae3d89be-0a42-4a3d-914c-3bff67bd37b4","Type":"ContainerDied","Data":"264b1ee903df6ce1a97e07b64d812c86782e3a58f2f09c609b3c81d9d02ee22a"}
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.704641 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"00ff24aff045fd814a16790f59f062842fc0ff9e0601f5b391a537de4dcebe49"}
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.706311 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"f0e5c35dcc9f669808490b8ddada56ac33644db4f172f75ea9b908094bd89de9"}
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.706447 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"c7e5718722f1f0cc720ba03200b5072c4500b734f459734ed12c0f62b2f3f7ea"}
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.706568 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"447f9192660f0cd3d7854a61bd51664ab2997e184354155cd12971c4a4b3c37f"}
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.706646 5065 generic.go:334] "Generic (PLEG): container finished" podID="a1486a5d-5fb5-4055-b5e2-a9eceb919f29" containerID="240b3c89b50ae321407c1cf6aa488343699d4a7a372717dfbfb42f1a27654d70" exitCode=0
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.706669 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb78-account-create-zw8ns" event={"ID":"a1486a5d-5fb5-4055-b5e2-a9eceb919f29","Type":"ContainerDied","Data":"240b3c89b50ae321407c1cf6aa488343699d4a7a372717dfbfb42f1a27654d70"}
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.710867 5065 generic.go:334] "Generic (PLEG): container finished" podID="a416f725-cd7c-4bd8-9123-28cad18157d9" containerID="8873473b6c6d45cdc9c68e469a3d5b5e234302c288241e9875c11f2360575009" exitCode=0
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.710923 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a416f725-cd7c-4bd8-9123-28cad18157d9","Type":"ContainerDied","Data":"8873473b6c6d45cdc9c68e469a3d5b5e234302c288241e9875c11f2360575009"}
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.721089 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ae3d89be-0a42-4a3d-914c-3bff67bd37b4","Type":"ContainerStarted","Data":"c184ff5407110302a6125a5a613f8a91d5febe7a6d10d230bf471f3d0f46b2f4"}
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.722473 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.778801 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=49.998670249 podStartE2EDuration="57.778775771s" podCreationTimestamp="2025-10-08 13:35:01 +0000 UTC" firstStartedPulling="2025-10-08 13:35:15.604210048 +0000 UTC m=+1017.381591805" lastFinishedPulling="2025-10-08 13:35:23.38431557 +0000 UTC m=+1025.161697327" observedRunningTime="2025-10-08 13:35:58.772642277 +0000 UTC m=+1060.550024064" watchObservedRunningTime="2025-10-08 13:35:58.778775771 +0000 UTC m=+1060.556157528"
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.898741 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xnw9m" podUID="4749b7e4-3896-474d-84b3-8ddf351a24ac" containerName="ovn-controller" probeResult="failure" output=<
Oct 08 13:35:58 crc kubenswrapper[5065]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 08 13:35:58 crc kubenswrapper[5065]: >
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.949679 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f9wxn"
Oct 08 13:35:58 crc kubenswrapper[5065]: I1008 13:35:58.953064 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f9wxn"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.182926 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xnw9m-config-jfkph"]
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.185937 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.189215 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.193150 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xnw9m-config-jfkph"]
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.201348 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4f23-account-create-qdwq2"
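[Editor's note: the startup-latency entries encode a useful identity: podStartSLOduration equals the end-to-end start time minus the image-pull window, i.e. E2E - (lastFinishedPulling - firstStartedPulling). Checking that against the rabbitmq-server-0 numbers above (a sketch; values copied from the log, field semantics are the kubelet's):]

```go
// Verifies: 57.778775771s - (13:35:23.38431557 - 13:35:15.604210048)
//         = 57.778775771s - 7.780105522s = 49.998670249s (the logged SLO).
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // Parse accepts extra fractional seconds
	created, _ := time.Parse(layout, "2025-10-08 13:35:01 +0000 UTC")
	firstPull, _ := time.Parse(layout, "2025-10-08 13:35:15.604210048 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2025-10-08 13:35:23.38431557 +0000 UTC")
	running, _ := time.Parse(layout, "2025-10-08 13:35:58.778775771 +0000 UTC")

	e2e := running.Sub(created)     // podStartE2EDuration
	pull := lastPull.Sub(firstPull) // time spent pulling images
	slo := e2e - pull               // podStartSLOduration
	fmt.Println(e2e, pull, slo)
}
```

[The jobs with firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" later in this log pulled no images, so their SLO and E2E durations coincide.]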
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.323016 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlzn7\" (UniqueName: \"kubernetes.io/projected/5a00958a-1aab-44b8-9e6b-13a09ca60d99-kube-api-access-wlzn7\") pod \"5a00958a-1aab-44b8-9e6b-13a09ca60d99\" (UID: \"5a00958a-1aab-44b8-9e6b-13a09ca60d99\") "
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.323262 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8h6d\" (UniqueName: \"kubernetes.io/projected/9621cfec-f558-44af-bc4d-6f59c6a0398e-kube-api-access-d8h6d\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.323324 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-run\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.323350 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-run-ovn\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.323371 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-log-ovn\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.323430 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9621cfec-f558-44af-bc4d-6f59c6a0398e-scripts\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.323501 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9621cfec-f558-44af-bc4d-6f59c6a0398e-additional-scripts\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.327692 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a00958a-1aab-44b8-9e6b-13a09ca60d99-kube-api-access-wlzn7" (OuterVolumeSpecName: "kube-api-access-wlzn7") pod "5a00958a-1aab-44b8-9e6b-13a09ca60d99" (UID: "5a00958a-1aab-44b8-9e6b-13a09ca60d99"). InnerVolumeSpecName "kube-api-access-wlzn7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.425299 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-run\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.425350 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-run-ovn\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.425390 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-log-ovn\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.425461 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9621cfec-f558-44af-bc4d-6f59c6a0398e-scripts\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.425703 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-run-ovn\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.425703 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-run\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.425819 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-log-ovn\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.425853 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9621cfec-f558-44af-bc4d-6f59c6a0398e-additional-scripts\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.425934 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8h6d\" (UniqueName: \"kubernetes.io/projected/9621cfec-f558-44af-bc4d-6f59c6a0398e-kube-api-access-d8h6d\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.426005 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlzn7\" (UniqueName: \"kubernetes.io/projected/5a00958a-1aab-44b8-9e6b-13a09ca60d99-kube-api-access-wlzn7\") on node \"crc\" DevicePath \"\""
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.426690 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9621cfec-f558-44af-bc4d-6f59c6a0398e-additional-scripts\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.428132 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9621cfec-f558-44af-bc4d-6f59c6a0398e-scripts\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.444100 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8h6d\" (UniqueName: \"kubernetes.io/projected/9621cfec-f558-44af-bc4d-6f59c6a0398e-kube-api-access-d8h6d\") pod \"ovn-controller-xnw9m-config-jfkph\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") " pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.527480 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.733533 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4f23-account-create-qdwq2" event={"ID":"5a00958a-1aab-44b8-9e6b-13a09ca60d99","Type":"ContainerDied","Data":"bc57d8ea5472e080d16b6d40deae1a4eb26baa452abb97246e4f9e54bd9beb23"}
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.733572 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc57d8ea5472e080d16b6d40deae1a4eb26baa452abb97246e4f9e54bd9beb23"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.733625 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4f23-account-create-qdwq2"
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.741071 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a416f725-cd7c-4bd8-9123-28cad18157d9","Type":"ContainerStarted","Data":"7ac59a251e4e8b65634fce3722c85c666a539268df3ef42910aef18edd46491a"}
Oct 08 13:35:59 crc kubenswrapper[5065]: I1008 13:35:59.773436 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=48.906628011 podStartE2EDuration="57.773406351s" podCreationTimestamp="2025-10-08 13:35:02 +0000 UTC" firstStartedPulling="2025-10-08 13:35:15.648867496 +0000 UTC m=+1017.426249253" lastFinishedPulling="2025-10-08 13:35:24.515645836 +0000 UTC m=+1026.293027593" observedRunningTime="2025-10-08 13:35:59.770154325 +0000 UTC m=+1061.547536082" watchObservedRunningTime="2025-10-08 13:35:59.773406351 +0000 UTC m=+1061.550788108"
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.087787 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fb78-account-create-zw8ns"
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.142387 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkgpw\" (UniqueName: \"kubernetes.io/projected/a1486a5d-5fb5-4055-b5e2-a9eceb919f29-kube-api-access-vkgpw\") pod \"a1486a5d-5fb5-4055-b5e2-a9eceb919f29\" (UID: \"a1486a5d-5fb5-4055-b5e2-a9eceb919f29\") "
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.148781 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1486a5d-5fb5-4055-b5e2-a9eceb919f29-kube-api-access-vkgpw" (OuterVolumeSpecName: "kube-api-access-vkgpw") pod "a1486a5d-5fb5-4055-b5e2-a9eceb919f29" (UID: "a1486a5d-5fb5-4055-b5e2-a9eceb919f29"). InnerVolumeSpecName "kube-api-access-vkgpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.244555 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkgpw\" (UniqueName: \"kubernetes.io/projected/a1486a5d-5fb5-4055-b5e2-a9eceb919f29-kube-api-access-vkgpw\") on node \"crc\" DevicePath \"\""
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.265380 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xnw9m-config-jfkph"]
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.750506 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fb78-account-create-zw8ns"
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.750518 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb78-account-create-zw8ns" event={"ID":"a1486a5d-5fb5-4055-b5e2-a9eceb919f29","Type":"ContainerDied","Data":"b21b71965a6806417169644dfc5f5265987ff43266210ef804384ad7bcc9a47e"}
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.751157 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b21b71965a6806417169644dfc5f5265987ff43266210ef804384ad7bcc9a47e"
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.754391 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xnw9m-config-jfkph" event={"ID":"9621cfec-f558-44af-bc4d-6f59c6a0398e","Type":"ContainerStarted","Data":"f67790086ee970d905212975fc43ca99ba502b657df1fa6b0d332e8df0913c96"}
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.754600 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xnw9m-config-jfkph" event={"ID":"9621cfec-f558-44af-bc4d-6f59c6a0398e","Type":"ContainerStarted","Data":"240ed8dfbb172fa1695f945c37aba4695081fe65e6ef45bda88371478785ea6c"}
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.761953 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"bb695ac650185f4333ac64a02ac24fd255826948ebc5681b92747b3e86469514"}
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.762002 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"20d9ec411562f51e0685da916aad11e10d63910e27f84036d46574483c293732"}
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.762012 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"6042f285dd28c2e556762f729925d333ecdd7101fd2afd3449915404051b7432"}
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.762020 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"64b194891a57866256321c60963b1a55abe5ea9137d96d8eea3238c821373834"}
Oct 08 13:36:00 crc kubenswrapper[5065]: I1008 13:36:00.775269 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xnw9m-config-jfkph" podStartSLOduration=1.7752528440000002 podStartE2EDuration="1.775252844s" podCreationTimestamp="2025-10-08 13:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:36:00.772954133 +0000 UTC m=+1062.550335890" watchObservedRunningTime="2025-10-08 13:36:00.775252844 +0000 UTC m=+1062.552634611"
Oct 08 13:36:01 crc kubenswrapper[5065]: I1008 13:36:01.770964 5065 generic.go:334] "Generic (PLEG): container finished" podID="9621cfec-f558-44af-bc4d-6f59c6a0398e" containerID="f67790086ee970d905212975fc43ca99ba502b657df1fa6b0d332e8df0913c96" exitCode=0
Oct 08 13:36:01 crc kubenswrapper[5065]: I1008 13:36:01.771208 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xnw9m-config-jfkph" event={"ID":"9621cfec-f558-44af-bc4d-6f59c6a0398e","Type":"ContainerDied","Data":"f67790086ee970d905212975fc43ca99ba502b657df1fa6b0d332e8df0913c96"}
Oct 08 13:36:02 crc kubenswrapper[5065]: I1008 13:36:02.786343 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"4bfd68b6c2e297a3e83b008f64bf4eb401d26a1425b3db49bc21146e3d7c7872"}
Oct 08 13:36:02 crc kubenswrapper[5065]: I1008 13:36:02.786728 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"6c09e71db7522c3e069f3e338197a1e134c04222ec8bac9d76d05c50c19f239d"}
Oct 08 13:36:02 crc kubenswrapper[5065]: I1008 13:36:02.786746 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"9c7224674d840915450cc8c6e25639a7e80b95b6957004f6de0f29bf0fb91d8b"}
Oct 08 13:36:02 crc kubenswrapper[5065]: I1008 13:36:02.786759 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"7c64b1351e83cde0e80c1d7fe3ad4b7e16de2a070b15742831cbf0394985803c"}
Oct 08 13:36:02 crc kubenswrapper[5065]: I1008 13:36:02.786772 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"4af6a29227998441f27fb03057c52c8260bc95198e0a44323b807dcb4528d586"}
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.120525 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.194250 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9621cfec-f558-44af-bc4d-6f59c6a0398e-scripts\") pod \"9621cfec-f558-44af-bc4d-6f59c6a0398e\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") "
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.194320 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-run-ovn\") pod \"9621cfec-f558-44af-bc4d-6f59c6a0398e\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") "
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.194344 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-log-ovn\") pod \"9621cfec-f558-44af-bc4d-6f59c6a0398e\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") "
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.194429 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-run\") pod \"9621cfec-f558-44af-bc4d-6f59c6a0398e\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") "
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.194473 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9621cfec-f558-44af-bc4d-6f59c6a0398e-additional-scripts\") pod \"9621cfec-f558-44af-bc4d-6f59c6a0398e\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") "
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.194512 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8h6d\" (UniqueName: \"kubernetes.io/projected/9621cfec-f558-44af-bc4d-6f59c6a0398e-kube-api-access-d8h6d\") pod \"9621cfec-f558-44af-bc4d-6f59c6a0398e\" (UID: \"9621cfec-f558-44af-bc4d-6f59c6a0398e\") "
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.196058 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9621cfec-f558-44af-bc4d-6f59c6a0398e" (UID: "9621cfec-f558-44af-bc4d-6f59c6a0398e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.196120 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9621cfec-f558-44af-bc4d-6f59c6a0398e" (UID: "9621cfec-f558-44af-bc4d-6f59c6a0398e"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.196150 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-run" (OuterVolumeSpecName: "var-run") pod "9621cfec-f558-44af-bc4d-6f59c6a0398e" (UID: "9621cfec-f558-44af-bc4d-6f59c6a0398e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.196801 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9621cfec-f558-44af-bc4d-6f59c6a0398e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9621cfec-f558-44af-bc4d-6f59c6a0398e" (UID: "9621cfec-f558-44af-bc4d-6f59c6a0398e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.197053 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9621cfec-f558-44af-bc4d-6f59c6a0398e-scripts" (OuterVolumeSpecName: "scripts") pod "9621cfec-f558-44af-bc4d-6f59c6a0398e" (UID: "9621cfec-f558-44af-bc4d-6f59c6a0398e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.206299 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9621cfec-f558-44af-bc4d-6f59c6a0398e-kube-api-access-d8h6d" (OuterVolumeSpecName: "kube-api-access-d8h6d") pod "9621cfec-f558-44af-bc4d-6f59c6a0398e" (UID: "9621cfec-f558-44af-bc4d-6f59c6a0398e"). InnerVolumeSpecName "kube-api-access-d8h6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.296227 5065 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-run\") on node \"crc\" DevicePath \"\""
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.296265 5065 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9621cfec-f558-44af-bc4d-6f59c6a0398e-additional-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.296277 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8h6d\" (UniqueName: \"kubernetes.io/projected/9621cfec-f558-44af-bc4d-6f59c6a0398e-kube-api-access-d8h6d\") on node \"crc\" DevicePath \"\""
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.296287 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9621cfec-f558-44af-bc4d-6f59c6a0398e-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.296295 5065 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-run-ovn\") on node \"crc\" DevicePath \"\""
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.296304 5065 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9621cfec-f558-44af-bc4d-6f59c6a0398e-var-log-ovn\") on node \"crc\" DevicePath \"\""
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.384735 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xnw9m-config-jfkph"]
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.389936 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xnw9m-config-jfkph"]
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.596114 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.798243 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="240ed8dfbb172fa1695f945c37aba4695081fe65e6ef45bda88371478785ea6c"
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.798246 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xnw9m-config-jfkph"
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.810127 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"ee8180b4debc8a8a65ab9f256776c79126427b029cc1b89fe8e4f39cd79c3743"}
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.810176 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerStarted","Data":"361bbf3967cbd94d97d58e01749266489ce91e87fdaebf76c4503f54283e2a95"}
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.845074 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.211810157 podStartE2EDuration="25.845052003s" podCreationTimestamp="2025-10-08 13:35:38 +0000 UTC" firstStartedPulling="2025-10-08 13:35:56.091112506 +0000 UTC m=+1057.868494263" lastFinishedPulling="2025-10-08 13:36:01.724354342 +0000 UTC m=+1063.501736109" observedRunningTime="2025-10-08 13:36:03.842716791 +0000 UTC m=+1065.620098568" watchObservedRunningTime="2025-10-08 13:36:03.845052003 +0000 UTC m=+1065.622433760"
Oct 08 13:36:03 crc kubenswrapper[5065]: I1008 13:36:03.870610 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xnw9m"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.139458 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-qmlbs"]
Oct 08 13:36:04 crc kubenswrapper[5065]: E1008 13:36:04.140181 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1486a5d-5fb5-4055-b5e2-a9eceb919f29" containerName="mariadb-account-create"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.140200 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1486a5d-5fb5-4055-b5e2-a9eceb919f29" containerName="mariadb-account-create"
Oct 08 13:36:04 crc kubenswrapper[5065]: E1008 13:36:04.140220 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a00958a-1aab-44b8-9e6b-13a09ca60d99" containerName="mariadb-account-create"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.140227 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a00958a-1aab-44b8-9e6b-13a09ca60d99" containerName="mariadb-account-create"
Oct 08 13:36:04 crc kubenswrapper[5065]: E1008 13:36:04.140239 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9621cfec-f558-44af-bc4d-6f59c6a0398e" containerName="ovn-config"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.140245 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9621cfec-f558-44af-bc4d-6f59c6a0398e" containerName="ovn-config"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.140432 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="9621cfec-f558-44af-bc4d-6f59c6a0398e" containerName="ovn-config"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.140464 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1486a5d-5fb5-4055-b5e2-a9eceb919f29" containerName="mariadb-account-create"
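[Editor's note: the cpu_manager/memory_manager "RemoveStaleState" burst above is housekeeping triggered by admitting the new dnsmasq pod: checkpointed per-container resource assignments for pods that no longer exist are dropped. A loose sketch of the pattern (illustrative types; the real managers store CPU sets and NUMA affinities, not strings):]

```go
// Sketch: dropping stale per-container state for deleted pods.
package main

import "fmt"

type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"9621cfec-f558-44af-bc4d-6f59c6a0398e", "ovn-config"}:  "cpuset 0-1",
		{"739c4d4d-fa0c-49c1-b435-606f3eb19f49", "dnsmasq-dns"}: "cpuset 2",
	}
	// Only the new dnsmasq pod is still active; the ovn-config job is gone.
	active := map[string]bool{"739c4d4d-fa0c-49c1-b435-606f3eb19f49": true}
	removeStaleState(assignments, active)
}
```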
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.140478 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a00958a-1aab-44b8-9e6b-13a09ca60d99" containerName="mariadb-account-create"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.141476 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.144152 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.158966 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-qmlbs"]
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.211398 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-config\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.211580 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-dns-swift-storage-0\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.211629 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-ovsdbserver-nb\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.211762 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-ovsdbserver-sb\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.211854 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-dns-svc\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.211937 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7vb8\" (UniqueName: \"kubernetes.io/projected/739c4d4d-fa0c-49c1-b435-606f3eb19f49-kube-api-access-b7vb8\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.313277 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-dns-swift-storage-0\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.313343 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-ovsdbserver-nb\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.313391 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-ovsdbserver-sb\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.313442 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-dns-svc\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.313474 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7vb8\" (UniqueName: \"kubernetes.io/projected/739c4d4d-fa0c-49c1-b435-606f3eb19f49-kube-api-access-b7vb8\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.313514 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-config\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.314337 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-dns-swift-storage-0\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.314697 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-ovsdbserver-nb\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.314831 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-dns-svc\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.314910 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-ovsdbserver-sb\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.315368 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-config\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.335237 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7vb8\" (UniqueName: \"kubernetes.io/projected/739c4d4d-fa0c-49c1-b435-606f3eb19f49-kube-api-access-b7vb8\") pod \"dnsmasq-dns-564965cbfc-qmlbs\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.463568 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.886567 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9621cfec-f558-44af-bc4d-6f59c6a0398e" path="/var/lib/kubelet/pods/9621cfec-f558-44af-bc4d-6f59c6a0398e/volumes"
Oct 08 13:36:04 crc kubenswrapper[5065]: I1008 13:36:04.976792 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-qmlbs"]
Oct 08 13:36:11 crc kubenswrapper[5065]: W1008 13:36:11.243179 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod739c4d4d_fa0c_49c1_b435_606f3eb19f49.slice/crio-7fcd42507dc5932ca138c0ecfef6e0118948e5e8fce673eae7467f925d263eb3 WatchSource:0}: Error finding container 7fcd42507dc5932ca138c0ecfef6e0118948e5e8fce673eae7467f925d263eb3: Status 404 returned error can't find the container with id 7fcd42507dc5932ca138c0ecfef6e0118948e5e8fce673eae7467f925d263eb3
Oct 08 13:36:11 crc kubenswrapper[5065]: I1008 13:36:11.882054 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dtvgx" event={"ID":"77637c1f-26f5-4ea7-9a5c-af70030ca78c","Type":"ContainerStarted","Data":"5b2cf90c955e396a351727202956af40d4d62861367cf9be625f0e7e9590bf18"}
Oct 08 13:36:11 crc kubenswrapper[5065]: I1008 13:36:11.883583 5065 generic.go:334] "Generic (PLEG): container finished" podID="739c4d4d-fa0c-49c1-b435-606f3eb19f49" containerID="c59de23f50a83f04333b2fc7ff02b7e119d4785656ce4f2dc74280bd36131a45" exitCode=0
Oct 08 13:36:11 crc kubenswrapper[5065]: I1008 13:36:11.883622 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" event={"ID":"739c4d4d-fa0c-49c1-b435-606f3eb19f49","Type":"ContainerDied","Data":"c59de23f50a83f04333b2fc7ff02b7e119d4785656ce4f2dc74280bd36131a45"}
Oct 08 13:36:11 crc kubenswrapper[5065]: I1008 13:36:11.883647 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" event={"ID":"739c4d4d-fa0c-49c1-b435-606f3eb19f49","Type":"ContainerStarted","Data":"7fcd42507dc5932ca138c0ecfef6e0118948e5e8fce673eae7467f925d263eb3"}
Oct 08 13:36:11 crc kubenswrapper[5065]: I1008 13:36:11.900457 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-dtvgx" podStartSLOduration=2.146336884 podStartE2EDuration="15.900393296s" podCreationTimestamp="2025-10-08 13:35:56 +0000 UTC" firstStartedPulling="2025-10-08 13:35:57.644559844 +0000 UTC m=+1059.421941601" lastFinishedPulling="2025-10-08 13:36:11.398616256 +0000 UTC m=+1073.175998013" observedRunningTime="2025-10-08 13:36:11.898810623 +0000 UTC m=+1073.676192380" watchObservedRunningTime="2025-10-08 13:36:11.900393296 +0000 UTC m=+1073.677775053"
watchObservedRunningTime="2025-10-08 13:36:11.900393296 +0000 UTC m=+1073.677775053" Oct 08 13:36:12 crc kubenswrapper[5065]: I1008 13:36:12.893701 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" event={"ID":"739c4d4d-fa0c-49c1-b435-606f3eb19f49","Type":"ContainerStarted","Data":"6995322149d4c44f0e2438c7519c2d4be6214bb33c70ab5eb2f51b08a0de9638"} Oct 08 13:36:12 crc kubenswrapper[5065]: I1008 13:36:12.894083 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" Oct 08 13:36:12 crc kubenswrapper[5065]: I1008 13:36:12.921534 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" podStartSLOduration=8.921512721 podStartE2EDuration="8.921512721s" podCreationTimestamp="2025-10-08 13:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:36:12.915825389 +0000 UTC m=+1074.693207156" watchObservedRunningTime="2025-10-08 13:36:12.921512721 +0000 UTC m=+1074.698894488" Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.355635 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.598634 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.617707 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-tw6n9"] Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.619003 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tw6n9" Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.652516 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tw6n9"] Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.700096 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f69sj\" (UniqueName: \"kubernetes.io/projected/8b139de0-decf-49d9-8937-87abc053ee7d-kube-api-access-f69sj\") pod \"barbican-db-create-tw6n9\" (UID: \"8b139de0-decf-49d9-8937-87abc053ee7d\") " pod="openstack/barbican-db-create-tw6n9" Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.730025 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xv8c6"] Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.731929 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-xv8c6" Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.739405 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xv8c6"] Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.802564 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f69sj\" (UniqueName: \"kubernetes.io/projected/8b139de0-decf-49d9-8937-87abc053ee7d-kube-api-access-f69sj\") pod \"barbican-db-create-tw6n9\" (UID: \"8b139de0-decf-49d9-8937-87abc053ee7d\") " pod="openstack/barbican-db-create-tw6n9" Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.815189 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-6nrgv"] Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.817754 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6nrgv" Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.843264 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6nrgv"] Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.852244 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f69sj\" (UniqueName: \"kubernetes.io/projected/8b139de0-decf-49d9-8937-87abc053ee7d-kube-api-access-f69sj\") pod \"barbican-db-create-tw6n9\" (UID: \"8b139de0-decf-49d9-8937-87abc053ee7d\") " pod="openstack/barbican-db-create-tw6n9" Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.904059 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7vhj\" (UniqueName: \"kubernetes.io/projected/871946b3-7c0e-4329-a752-46b5a8d5792f-kube-api-access-q7vhj\") pod \"cinder-db-create-xv8c6\" (UID: \"871946b3-7c0e-4329-a752-46b5a8d5792f\") " pod="openstack/cinder-db-create-xv8c6" Oct 08 13:36:13 crc kubenswrapper[5065]: I1008 13:36:13.939349 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tw6n9" Oct 08 13:36:14 crc kubenswrapper[5065]: I1008 13:36:14.005466 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-999qd\" (UniqueName: \"kubernetes.io/projected/915a9641-6839-457e-9309-ef7b5b83bcfa-kube-api-access-999qd\") pod \"neutron-db-create-6nrgv\" (UID: \"915a9641-6839-457e-9309-ef7b5b83bcfa\") " pod="openstack/neutron-db-create-6nrgv" Oct 08 13:36:14 crc kubenswrapper[5065]: I1008 13:36:14.005633 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7vhj\" (UniqueName: \"kubernetes.io/projected/871946b3-7c0e-4329-a752-46b5a8d5792f-kube-api-access-q7vhj\") pod \"cinder-db-create-xv8c6\" (UID: \"871946b3-7c0e-4329-a752-46b5a8d5792f\") " pod="openstack/cinder-db-create-xv8c6" Oct 08 13:36:14 crc kubenswrapper[5065]: I1008 13:36:14.025235 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-76dpp"] Oct 08 13:36:14 crc kubenswrapper[5065]: I1008 13:36:14.026402 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-76dpp" Oct 08 13:36:14 crc kubenswrapper[5065]: I1008 13:36:14.027652 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7vhj\" (UniqueName: \"kubernetes.io/projected/871946b3-7c0e-4329-a752-46b5a8d5792f-kube-api-access-q7vhj\") pod \"cinder-db-create-xv8c6\" (UID: \"871946b3-7c0e-4329-a752-46b5a8d5792f\") " pod="openstack/cinder-db-create-xv8c6" Oct 08 13:36:14 crc kubenswrapper[5065]: I1008 13:36:14.030171 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 13:36:14 crc kubenswrapper[5065]: I1008 13:36:14.030450 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j9fw4" Oct 08 13:36:14 crc kubenswrapper[5065]: I1008 13:36:14.030583 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 13:36:14 crc kubenswrapper[5065]: I1008 13:36:14.030712 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 13:36:14 crc kubenswrapper[5065]: I1008 13:36:14.046406 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-76dpp"] Oct 08 13:36:14 crc kubenswrapper[5065]: I1008 13:36:14.050136 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xv8c6" Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:14.109390 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-999qd\" (UniqueName: \"kubernetes.io/projected/915a9641-6839-457e-9309-ef7b5b83bcfa-kube-api-access-999qd\") pod \"neutron-db-create-6nrgv\" (UID: \"915a9641-6839-457e-9309-ef7b5b83bcfa\") " pod="openstack/neutron-db-create-6nrgv" Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:14.134102 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-999qd\" (UniqueName: \"kubernetes.io/projected/915a9641-6839-457e-9309-ef7b5b83bcfa-kube-api-access-999qd\") pod \"neutron-db-create-6nrgv\" (UID: \"915a9641-6839-457e-9309-ef7b5b83bcfa\") " pod="openstack/neutron-db-create-6nrgv" Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:14.151906 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-6nrgv" Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:14.210875 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e2de891-07e9-44cf-aa13-593ecc5f571a-config-data\") pod \"keystone-db-sync-76dpp\" (UID: \"0e2de891-07e9-44cf-aa13-593ecc5f571a\") " pod="openstack/keystone-db-sync-76dpp" Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:14.210906 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e2de891-07e9-44cf-aa13-593ecc5f571a-combined-ca-bundle\") pod \"keystone-db-sync-76dpp\" (UID: \"0e2de891-07e9-44cf-aa13-593ecc5f571a\") " pod="openstack/keystone-db-sync-76dpp" Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:14.210986 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rcq2\" (UniqueName: \"kubernetes.io/projected/0e2de891-07e9-44cf-aa13-593ecc5f571a-kube-api-access-6rcq2\") pod \"keystone-db-sync-76dpp\" (UID: \"0e2de891-07e9-44cf-aa13-593ecc5f571a\") " pod="openstack/keystone-db-sync-76dpp" Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:14.312506 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e2de891-07e9-44cf-aa13-593ecc5f571a-config-data\") pod \"keystone-db-sync-76dpp\" (UID: \"0e2de891-07e9-44cf-aa13-593ecc5f571a\") " pod="openstack/keystone-db-sync-76dpp" Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:14.312541 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e2de891-07e9-44cf-aa13-593ecc5f571a-combined-ca-bundle\") pod \"keystone-db-sync-76dpp\" (UID: \"0e2de891-07e9-44cf-aa13-593ecc5f571a\") " pod="openstack/keystone-db-sync-76dpp" Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:14.312614 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rcq2\" (UniqueName: \"kubernetes.io/projected/0e2de891-07e9-44cf-aa13-593ecc5f571a-kube-api-access-6rcq2\") pod \"keystone-db-sync-76dpp\" (UID: \"0e2de891-07e9-44cf-aa13-593ecc5f571a\") " pod="openstack/keystone-db-sync-76dpp" Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:14.317801 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e2de891-07e9-44cf-aa13-593ecc5f571a-config-data\") pod \"keystone-db-sync-76dpp\" (UID: \"0e2de891-07e9-44cf-aa13-593ecc5f571a\") " pod="openstack/keystone-db-sync-76dpp" Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:14.318640 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e2de891-07e9-44cf-aa13-593ecc5f571a-combined-ca-bundle\") pod \"keystone-db-sync-76dpp\" (UID: \"0e2de891-07e9-44cf-aa13-593ecc5f571a\") " pod="openstack/keystone-db-sync-76dpp" Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:14.335144 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rcq2\" (UniqueName: \"kubernetes.io/projected/0e2de891-07e9-44cf-aa13-593ecc5f571a-kube-api-access-6rcq2\") pod \"keystone-db-sync-76dpp\" (UID: \"0e2de891-07e9-44cf-aa13-593ecc5f571a\") " pod="openstack/keystone-db-sync-76dpp" Oct 08 13:36:15 crc 
kubenswrapper[5065]: I1008 13:36:14.437719 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-76dpp" Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.311832 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tw6n9"] Oct 08 13:36:15 crc kubenswrapper[5065]: W1008 13:36:15.313620 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b139de0_decf_49d9_8937_87abc053ee7d.slice/crio-2d16aa1d426c4cac04b8d8b0d6518ff2d6fe4b8e278cf13dcdfdcee00152b3d9 WatchSource:0}: Error finding container 2d16aa1d426c4cac04b8d8b0d6518ff2d6fe4b8e278cf13dcdfdcee00152b3d9: Status 404 returned error can't find the container with id 2d16aa1d426c4cac04b8d8b0d6518ff2d6fe4b8e278cf13dcdfdcee00152b3d9 Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.386619 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-76dpp"] Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.408163 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6nrgv"] Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.426213 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xv8c6"] Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.923039 5065 generic.go:334] "Generic (PLEG): container finished" podID="8b139de0-decf-49d9-8937-87abc053ee7d" containerID="767b5650176872a830e047466d129d41f486e6bcabd932592c416ec0038e7814" exitCode=0 Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.923724 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tw6n9" event={"ID":"8b139de0-decf-49d9-8937-87abc053ee7d","Type":"ContainerDied","Data":"767b5650176872a830e047466d129d41f486e6bcabd932592c416ec0038e7814"} Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.923759 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tw6n9" event={"ID":"8b139de0-decf-49d9-8937-87abc053ee7d","Type":"ContainerStarted","Data":"2d16aa1d426c4cac04b8d8b0d6518ff2d6fe4b8e278cf13dcdfdcee00152b3d9"} Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.926296 5065 generic.go:334] "Generic (PLEG): container finished" podID="871946b3-7c0e-4329-a752-46b5a8d5792f" containerID="7fc9b44c4c76329a137e71e77cf8552d2747dd91200b1b2a25ecec548a4f3af5" exitCode=0 Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.926363 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xv8c6" event={"ID":"871946b3-7c0e-4329-a752-46b5a8d5792f","Type":"ContainerDied","Data":"7fc9b44c4c76329a137e71e77cf8552d2747dd91200b1b2a25ecec548a4f3af5"} Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.926388 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xv8c6" event={"ID":"871946b3-7c0e-4329-a752-46b5a8d5792f","Type":"ContainerStarted","Data":"f940d94b240fe9825bd358a593d2bb07636f9f9b22c7c2f254f7108c9fb9794f"} Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.933197 5065 generic.go:334] "Generic (PLEG): container finished" podID="915a9641-6839-457e-9309-ef7b5b83bcfa" containerID="a76703a08c83f16e155e4a2515fe2a55f05d42fe92c2bc2d586cf48ce2f3aca2" exitCode=0 Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.933272 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6nrgv" 
event={"ID":"915a9641-6839-457e-9309-ef7b5b83bcfa","Type":"ContainerDied","Data":"a76703a08c83f16e155e4a2515fe2a55f05d42fe92c2bc2d586cf48ce2f3aca2"} Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.933301 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6nrgv" event={"ID":"915a9641-6839-457e-9309-ef7b5b83bcfa","Type":"ContainerStarted","Data":"67e264e95ad34a82057270ce3c7f68f317a03cc13b6d9cc9b9d65e0fe7187112"} Oct 08 13:36:15 crc kubenswrapper[5065]: I1008 13:36:15.942593 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-76dpp" event={"ID":"0e2de891-07e9-44cf-aa13-593ecc5f571a","Type":"ContainerStarted","Data":"f260b7a8cb13103d68e2c3897f5e0b2f41073498f9f8edf49daa5030a7b5215f"} Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.393393 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tw6n9" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.401359 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6nrgv" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.409693 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xv8c6" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.581441 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-999qd\" (UniqueName: \"kubernetes.io/projected/915a9641-6839-457e-9309-ef7b5b83bcfa-kube-api-access-999qd\") pod \"915a9641-6839-457e-9309-ef7b5b83bcfa\" (UID: \"915a9641-6839-457e-9309-ef7b5b83bcfa\") " Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.581533 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7vhj\" (UniqueName: \"kubernetes.io/projected/871946b3-7c0e-4329-a752-46b5a8d5792f-kube-api-access-q7vhj\") pod \"871946b3-7c0e-4329-a752-46b5a8d5792f\" (UID: \"871946b3-7c0e-4329-a752-46b5a8d5792f\") " Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.581755 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f69sj\" (UniqueName: \"kubernetes.io/projected/8b139de0-decf-49d9-8937-87abc053ee7d-kube-api-access-f69sj\") pod \"8b139de0-decf-49d9-8937-87abc053ee7d\" (UID: \"8b139de0-decf-49d9-8937-87abc053ee7d\") " Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.588139 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b139de0-decf-49d9-8937-87abc053ee7d-kube-api-access-f69sj" (OuterVolumeSpecName: "kube-api-access-f69sj") pod "8b139de0-decf-49d9-8937-87abc053ee7d" (UID: "8b139de0-decf-49d9-8937-87abc053ee7d"). InnerVolumeSpecName "kube-api-access-f69sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.588296 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871946b3-7c0e-4329-a752-46b5a8d5792f-kube-api-access-q7vhj" (OuterVolumeSpecName: "kube-api-access-q7vhj") pod "871946b3-7c0e-4329-a752-46b5a8d5792f" (UID: "871946b3-7c0e-4329-a752-46b5a8d5792f"). InnerVolumeSpecName "kube-api-access-q7vhj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.588580 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915a9641-6839-457e-9309-ef7b5b83bcfa-kube-api-access-999qd" (OuterVolumeSpecName: "kube-api-access-999qd") pod "915a9641-6839-457e-9309-ef7b5b83bcfa" (UID: "915a9641-6839-457e-9309-ef7b5b83bcfa"). InnerVolumeSpecName "kube-api-access-999qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.683826 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f69sj\" (UniqueName: \"kubernetes.io/projected/8b139de0-decf-49d9-8937-87abc053ee7d-kube-api-access-f69sj\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.683912 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-999qd\" (UniqueName: \"kubernetes.io/projected/915a9641-6839-457e-9309-ef7b5b83bcfa-kube-api-access-999qd\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.683933 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7vhj\" (UniqueName: \"kubernetes.io/projected/871946b3-7c0e-4329-a752-46b5a8d5792f-kube-api-access-q7vhj\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.961738 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tw6n9" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.961881 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tw6n9" event={"ID":"8b139de0-decf-49d9-8937-87abc053ee7d","Type":"ContainerDied","Data":"2d16aa1d426c4cac04b8d8b0d6518ff2d6fe4b8e278cf13dcdfdcee00152b3d9"} Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.962989 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d16aa1d426c4cac04b8d8b0d6518ff2d6fe4b8e278cf13dcdfdcee00152b3d9" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.965578 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xv8c6" event={"ID":"871946b3-7c0e-4329-a752-46b5a8d5792f","Type":"ContainerDied","Data":"f940d94b240fe9825bd358a593d2bb07636f9f9b22c7c2f254f7108c9fb9794f"} Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.965613 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f940d94b240fe9825bd358a593d2bb07636f9f9b22c7c2f254f7108c9fb9794f" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.965671 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xv8c6" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.968831 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6nrgv" event={"ID":"915a9641-6839-457e-9309-ef7b5b83bcfa","Type":"ContainerDied","Data":"67e264e95ad34a82057270ce3c7f68f317a03cc13b6d9cc9b9d65e0fe7187112"} Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.968858 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67e264e95ad34a82057270ce3c7f68f317a03cc13b6d9cc9b9d65e0fe7187112" Oct 08 13:36:17 crc kubenswrapper[5065]: I1008 13:36:17.968908 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-6nrgv" Oct 08 13:36:19 crc kubenswrapper[5065]: I1008 13:36:19.466278 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" Oct 08 13:36:19 crc kubenswrapper[5065]: I1008 13:36:19.513996 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-tfb6k"] Oct 08 13:36:19 crc kubenswrapper[5065]: I1008 13:36:19.514469 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k" podUID="a2151f29-c70c-44f5-a08d-39f9238778d5" containerName="dnsmasq-dns" containerID="cri-o://864c4bcee9aca12d289071331bb7d75eccb4e05b423685c527460123db301ef1" gracePeriod=10 Oct 08 13:36:19 crc kubenswrapper[5065]: I1008 13:36:19.986026 5065 generic.go:334] "Generic (PLEG): container finished" podID="a2151f29-c70c-44f5-a08d-39f9238778d5" containerID="864c4bcee9aca12d289071331bb7d75eccb4e05b423685c527460123db301ef1" exitCode=0 Oct 08 13:36:19 crc kubenswrapper[5065]: I1008 13:36:19.986069 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k" event={"ID":"a2151f29-c70c-44f5-a08d-39f9238778d5","Type":"ContainerDied","Data":"864c4bcee9aca12d289071331bb7d75eccb4e05b423685c527460123db301ef1"} Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.196242 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k" Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.342234 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbprb\" (UniqueName: \"kubernetes.io/projected/a2151f29-c70c-44f5-a08d-39f9238778d5-kube-api-access-xbprb\") pod \"a2151f29-c70c-44f5-a08d-39f9238778d5\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.342351 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-dns-svc\") pod \"a2151f29-c70c-44f5-a08d-39f9238778d5\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.342372 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-ovsdbserver-sb\") pod \"a2151f29-c70c-44f5-a08d-39f9238778d5\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.342393 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-config\") pod \"a2151f29-c70c-44f5-a08d-39f9238778d5\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.342469 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-ovsdbserver-nb\") pod \"a2151f29-c70c-44f5-a08d-39f9238778d5\" (UID: \"a2151f29-c70c-44f5-a08d-39f9238778d5\") " Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.346860 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2151f29-c70c-44f5-a08d-39f9238778d5-kube-api-access-xbprb" (OuterVolumeSpecName: "kube-api-access-xbprb") 
pod "a2151f29-c70c-44f5-a08d-39f9238778d5" (UID: "a2151f29-c70c-44f5-a08d-39f9238778d5"). InnerVolumeSpecName "kube-api-access-xbprb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.378714 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2151f29-c70c-44f5-a08d-39f9238778d5" (UID: "a2151f29-c70c-44f5-a08d-39f9238778d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.380944 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a2151f29-c70c-44f5-a08d-39f9238778d5" (UID: "a2151f29-c70c-44f5-a08d-39f9238778d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.383291 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-config" (OuterVolumeSpecName: "config") pod "a2151f29-c70c-44f5-a08d-39f9238778d5" (UID: "a2151f29-c70c-44f5-a08d-39f9238778d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.383396 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a2151f29-c70c-44f5-a08d-39f9238778d5" (UID: "a2151f29-c70c-44f5-a08d-39f9238778d5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.444353 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.444408 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.444436 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.444449 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2151f29-c70c-44f5-a08d-39f9238778d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:21 crc kubenswrapper[5065]: I1008 13:36:21.444461 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbprb\" (UniqueName: \"kubernetes.io/projected/a2151f29-c70c-44f5-a08d-39f9238778d5-kube-api-access-xbprb\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:22 crc kubenswrapper[5065]: I1008 13:36:22.006211 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k" event={"ID":"a2151f29-c70c-44f5-a08d-39f9238778d5","Type":"ContainerDied","Data":"c32c3afdaf2ca258d795b3f8730f4b232cd9ef38dbcffffa78398f2df3222c34"} Oct 08 13:36:22 crc kubenswrapper[5065]: I1008 13:36:22.006245 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b587f8db7-tfb6k" Oct 08 13:36:22 crc kubenswrapper[5065]: I1008 13:36:22.006262 5065 scope.go:117] "RemoveContainer" containerID="864c4bcee9aca12d289071331bb7d75eccb4e05b423685c527460123db301ef1" Oct 08 13:36:22 crc kubenswrapper[5065]: I1008 13:36:22.007926 5065 generic.go:334] "Generic (PLEG): container finished" podID="77637c1f-26f5-4ea7-9a5c-af70030ca78c" containerID="5b2cf90c955e396a351727202956af40d4d62861367cf9be625f0e7e9590bf18" exitCode=0 Oct 08 13:36:22 crc kubenswrapper[5065]: I1008 13:36:22.007998 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dtvgx" event={"ID":"77637c1f-26f5-4ea7-9a5c-af70030ca78c","Type":"ContainerDied","Data":"5b2cf90c955e396a351727202956af40d4d62861367cf9be625f0e7e9590bf18"} Oct 08 13:36:22 crc kubenswrapper[5065]: I1008 13:36:22.012952 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-76dpp" event={"ID":"0e2de891-07e9-44cf-aa13-593ecc5f571a","Type":"ContainerStarted","Data":"74a0260deb479ffd4631be8552dcd86ba1855d8f7296f9a609726174337599e4"} Oct 08 13:36:22 crc kubenswrapper[5065]: I1008 13:36:22.033124 5065 scope.go:117] "RemoveContainer" containerID="577d4ddff74eba3740d2624e787b781d0b31eed04d1ae22f625975693ab7293f" Oct 08 13:36:22 crc kubenswrapper[5065]: I1008 13:36:22.044493 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-tfb6k"] Oct 08 13:36:22 crc kubenswrapper[5065]: I1008 13:36:22.051197 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-tfb6k"] Oct 08 13:36:22 crc kubenswrapper[5065]: I1008 13:36:22.060032 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-76dpp" podStartSLOduration=2.485724994 podStartE2EDuration="8.06001334s" podCreationTimestamp="2025-10-08 13:36:14 +0000 UTC" firstStartedPulling="2025-10-08 13:36:15.388012518 +0000 UTC m=+1077.165394275" lastFinishedPulling="2025-10-08 13:36:20.962300864 +0000 UTC m=+1082.739682621" observedRunningTime="2025-10-08 13:36:22.054118813 +0000 UTC m=+1083.831500630" watchObservedRunningTime="2025-10-08 13:36:22.06001334 +0000 UTC m=+1083.837395097" Oct 08 13:36:22 crc kubenswrapper[5065]: I1008 13:36:22.885040 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2151f29-c70c-44f5-a08d-39f9238778d5" path="/var/lib/kubelet/pods/a2151f29-c70c-44f5-a08d-39f9238778d5/volumes" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.433568 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dtvgx" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.578095 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-combined-ca-bundle\") pod \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.578150 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-db-sync-config-data\") pod \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.578248 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-445nn\" (UniqueName: \"kubernetes.io/projected/77637c1f-26f5-4ea7-9a5c-af70030ca78c-kube-api-access-445nn\") pod \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.578327 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-config-data\") pod \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\" (UID: \"77637c1f-26f5-4ea7-9a5c-af70030ca78c\") " Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.583727 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "77637c1f-26f5-4ea7-9a5c-af70030ca78c" (UID: "77637c1f-26f5-4ea7-9a5c-af70030ca78c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.583978 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77637c1f-26f5-4ea7-9a5c-af70030ca78c-kube-api-access-445nn" (OuterVolumeSpecName: "kube-api-access-445nn") pod "77637c1f-26f5-4ea7-9a5c-af70030ca78c" (UID: "77637c1f-26f5-4ea7-9a5c-af70030ca78c"). InnerVolumeSpecName "kube-api-access-445nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.605160 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77637c1f-26f5-4ea7-9a5c-af70030ca78c" (UID: "77637c1f-26f5-4ea7-9a5c-af70030ca78c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.628563 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-config-data" (OuterVolumeSpecName: "config-data") pod "77637c1f-26f5-4ea7-9a5c-af70030ca78c" (UID: "77637c1f-26f5-4ea7-9a5c-af70030ca78c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.680777 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-445nn\" (UniqueName: \"kubernetes.io/projected/77637c1f-26f5-4ea7-9a5c-af70030ca78c-kube-api-access-445nn\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.680820 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.680830 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.680839 5065 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/77637c1f-26f5-4ea7-9a5c-af70030ca78c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.735190 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e7e9-account-create-8545w"] Oct 08 13:36:23 crc kubenswrapper[5065]: E1008 13:36:23.735656 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77637c1f-26f5-4ea7-9a5c-af70030ca78c" containerName="glance-db-sync" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.735681 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="77637c1f-26f5-4ea7-9a5c-af70030ca78c" containerName="glance-db-sync" Oct 08 13:36:23 crc kubenswrapper[5065]: E1008 13:36:23.735702 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2151f29-c70c-44f5-a08d-39f9238778d5" containerName="dnsmasq-dns" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.735712 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2151f29-c70c-44f5-a08d-39f9238778d5" containerName="dnsmasq-dns" Oct 08 13:36:23 crc kubenswrapper[5065]: E1008 13:36:23.735724 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b139de0-decf-49d9-8937-87abc053ee7d" containerName="mariadb-database-create" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.735731 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b139de0-decf-49d9-8937-87abc053ee7d" containerName="mariadb-database-create" Oct 08 13:36:23 crc kubenswrapper[5065]: E1008 13:36:23.735799 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="915a9641-6839-457e-9309-ef7b5b83bcfa" containerName="mariadb-database-create" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.735807 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="915a9641-6839-457e-9309-ef7b5b83bcfa" containerName="mariadb-database-create" Oct 08 13:36:23 crc kubenswrapper[5065]: E1008 13:36:23.735824 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2151f29-c70c-44f5-a08d-39f9238778d5" containerName="init" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.735832 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2151f29-c70c-44f5-a08d-39f9238778d5" containerName="init" Oct 08 13:36:23 crc kubenswrapper[5065]: E1008 13:36:23.735853 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871946b3-7c0e-4329-a752-46b5a8d5792f" containerName="mariadb-database-create" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 
13:36:23.735859 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="871946b3-7c0e-4329-a752-46b5a8d5792f" containerName="mariadb-database-create" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.736006 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2151f29-c70c-44f5-a08d-39f9238778d5" containerName="dnsmasq-dns" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.736027 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="871946b3-7c0e-4329-a752-46b5a8d5792f" containerName="mariadb-database-create" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.736037 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="915a9641-6839-457e-9309-ef7b5b83bcfa" containerName="mariadb-database-create" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.736057 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="77637c1f-26f5-4ea7-9a5c-af70030ca78c" containerName="glance-db-sync" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.736068 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b139de0-decf-49d9-8937-87abc053ee7d" containerName="mariadb-database-create" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.736714 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e7e9-account-create-8545w" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.738777 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.749624 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e7e9-account-create-8545w"] Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.837399 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5412-account-create-2vrxz"] Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.838517 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5412-account-create-2vrxz" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.841048 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.845821 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5412-account-create-2vrxz"] Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.883915 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mbm\" (UniqueName: \"kubernetes.io/projected/a86a58b3-276e-4dc0-96ea-b9d75f6e48c4-kube-api-access-f2mbm\") pod \"cinder-e7e9-account-create-8545w\" (UID: \"a86a58b3-276e-4dc0-96ea-b9d75f6e48c4\") " pod="openstack/cinder-e7e9-account-create-8545w" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.985392 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n56hd\" (UniqueName: \"kubernetes.io/projected/7df6bcfb-1378-4a89-a8d4-fe66dd35f072-kube-api-access-n56hd\") pod \"barbican-5412-account-create-2vrxz\" (UID: \"7df6bcfb-1378-4a89-a8d4-fe66dd35f072\") " pod="openstack/barbican-5412-account-create-2vrxz" Oct 08 13:36:23 crc kubenswrapper[5065]: I1008 13:36:23.985784 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mbm\" (UniqueName: \"kubernetes.io/projected/a86a58b3-276e-4dc0-96ea-b9d75f6e48c4-kube-api-access-f2mbm\") pod \"cinder-e7e9-account-create-8545w\" (UID: \"a86a58b3-276e-4dc0-96ea-b9d75f6e48c4\") " pod="openstack/cinder-e7e9-account-create-8545w" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.008695 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mbm\" (UniqueName: \"kubernetes.io/projected/a86a58b3-276e-4dc0-96ea-b9d75f6e48c4-kube-api-access-f2mbm\") pod \"cinder-e7e9-account-create-8545w\" (UID: \"a86a58b3-276e-4dc0-96ea-b9d75f6e48c4\") " pod="openstack/cinder-e7e9-account-create-8545w" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.036991 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dtvgx" event={"ID":"77637c1f-26f5-4ea7-9a5c-af70030ca78c","Type":"ContainerDied","Data":"520bf8c178996f3887b721bf2a333ab8057ea2aa8d3357d44fb8c78181cf8a7b"} Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.037039 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="520bf8c178996f3887b721bf2a333ab8057ea2aa8d3357d44fb8c78181cf8a7b" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.037094 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dtvgx" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.045974 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ca6e-account-create-r89cv"] Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.047781 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ca6e-account-create-r89cv" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.054155 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e7e9-account-create-8545w" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.056186 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.059716 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ca6e-account-create-r89cv"] Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.087378 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n56hd\" (UniqueName: \"kubernetes.io/projected/7df6bcfb-1378-4a89-a8d4-fe66dd35f072-kube-api-access-n56hd\") pod \"barbican-5412-account-create-2vrxz\" (UID: \"7df6bcfb-1378-4a89-a8d4-fe66dd35f072\") " pod="openstack/barbican-5412-account-create-2vrxz" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.108125 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n56hd\" (UniqueName: \"kubernetes.io/projected/7df6bcfb-1378-4a89-a8d4-fe66dd35f072-kube-api-access-n56hd\") pod \"barbican-5412-account-create-2vrxz\" (UID: \"7df6bcfb-1378-4a89-a8d4-fe66dd35f072\") " pod="openstack/barbican-5412-account-create-2vrxz" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.166969 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5412-account-create-2vrxz" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.190501 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7h28\" (UniqueName: \"kubernetes.io/projected/b21ff978-d4cd-4a8f-a2b5-85990d9a3517-kube-api-access-c7h28\") pod \"neutron-ca6e-account-create-r89cv\" (UID: \"b21ff978-d4cd-4a8f-a2b5-85990d9a3517\") " pod="openstack/neutron-ca6e-account-create-r89cv" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.293456 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7h28\" (UniqueName: \"kubernetes.io/projected/b21ff978-d4cd-4a8f-a2b5-85990d9a3517-kube-api-access-c7h28\") pod \"neutron-ca6e-account-create-r89cv\" (UID: \"b21ff978-d4cd-4a8f-a2b5-85990d9a3517\") " pod="openstack/neutron-ca6e-account-create-r89cv" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.321766 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7h28\" (UniqueName: \"kubernetes.io/projected/b21ff978-d4cd-4a8f-a2b5-85990d9a3517-kube-api-access-c7h28\") pod \"neutron-ca6e-account-create-r89cv\" (UID: \"b21ff978-d4cd-4a8f-a2b5-85990d9a3517\") " pod="openstack/neutron-ca6e-account-create-r89cv" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.362976 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-795846498c-xgcvb"] Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.368935 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.378063 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.378124 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.382834 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795846498c-xgcvb"] Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.397116 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ca6e-account-create-r89cv" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.502800 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bmqk\" (UniqueName: \"kubernetes.io/projected/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-kube-api-access-8bmqk\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.502958 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-config\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.503305 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-dns-swift-storage-0\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.503396 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-dns-svc\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.503549 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-ovsdbserver-nb\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.503795 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-ovsdbserver-sb\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: 
\"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.584751 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e7e9-account-create-8545w"] Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.608151 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-ovsdbserver-nb\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.608225 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-ovsdbserver-sb\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.608252 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bmqk\" (UniqueName: \"kubernetes.io/projected/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-kube-api-access-8bmqk\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.608291 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-config\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.608338 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-dns-swift-storage-0\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.608361 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-dns-svc\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.609286 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-dns-svc\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.610007 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-config\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.610030 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-dns-swift-storage-0\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.610801 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-ovsdbserver-sb\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.611556 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-ovsdbserver-nb\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.638640 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bmqk\" (UniqueName: \"kubernetes.io/projected/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-kube-api-access-8bmqk\") pod \"dnsmasq-dns-795846498c-xgcvb\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.720218 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.745081 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5412-account-create-2vrxz"] Oct 08 13:36:24 crc kubenswrapper[5065]: W1008 13:36:24.754782 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7df6bcfb_1378_4a89_a8d4_fe66dd35f072.slice/crio-d1a32ab52fae3e82517704bd2e7b3639b786900be8b9cbf032db15f93b814978 WatchSource:0}: Error finding container d1a32ab52fae3e82517704bd2e7b3639b786900be8b9cbf032db15f93b814978: Status 404 returned error can't find the container with id d1a32ab52fae3e82517704bd2e7b3639b786900be8b9cbf032db15f93b814978 Oct 08 13:36:24 crc kubenswrapper[5065]: I1008 13:36:24.906466 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ca6e-account-create-r89cv"] Oct 08 13:36:24 crc kubenswrapper[5065]: W1008 13:36:24.920170 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb21ff978_d4cd_4a8f_a2b5_85990d9a3517.slice/crio-aafaa3692038375179cdf4db4ef846fd65c61c87d975a97dbb14f8eaed77caef WatchSource:0}: Error finding container aafaa3692038375179cdf4db4ef846fd65c61c87d975a97dbb14f8eaed77caef: Status 404 returned error can't find the container with id aafaa3692038375179cdf4db4ef846fd65c61c87d975a97dbb14f8eaed77caef Oct 08 13:36:25 crc kubenswrapper[5065]: I1008 13:36:25.047110 5065 generic.go:334] "Generic (PLEG): container finished" podID="0e2de891-07e9-44cf-aa13-593ecc5f571a" containerID="74a0260deb479ffd4631be8552dcd86ba1855d8f7296f9a609726174337599e4" exitCode=0 Oct 08 13:36:25 crc kubenswrapper[5065]: I1008 13:36:25.047226 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-76dpp" event={"ID":"0e2de891-07e9-44cf-aa13-593ecc5f571a","Type":"ContainerDied","Data":"74a0260deb479ffd4631be8552dcd86ba1855d8f7296f9a609726174337599e4"} Oct 08 
Oct 08 13:36:25 crc kubenswrapper[5065]: I1008 13:36:25.049792 5065 generic.go:334] "Generic (PLEG): container finished" podID="7df6bcfb-1378-4a89-a8d4-fe66dd35f072" containerID="5ba00a4ab123b3b923191d3fab315b3342b75bc684a655689917fabd7ba6ebed" exitCode=0 Oct 08 13:36:25 crc kubenswrapper[5065]: I1008 13:36:25.049900 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5412-account-create-2vrxz" event={"ID":"7df6bcfb-1378-4a89-a8d4-fe66dd35f072","Type":"ContainerDied","Data":"5ba00a4ab123b3b923191d3fab315b3342b75bc684a655689917fabd7ba6ebed"} Oct 08 13:36:25 crc kubenswrapper[5065]: I1008 13:36:25.049989 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5412-account-create-2vrxz" event={"ID":"7df6bcfb-1378-4a89-a8d4-fe66dd35f072","Type":"ContainerStarted","Data":"d1a32ab52fae3e82517704bd2e7b3639b786900be8b9cbf032db15f93b814978"} Oct 08 13:36:25 crc kubenswrapper[5065]: I1008 13:36:25.051716 5065 generic.go:334] "Generic (PLEG): container finished" podID="a86a58b3-276e-4dc0-96ea-b9d75f6e48c4" containerID="dbeb728c095d1a4f766fd0ec012ec6be147cd0597e1ac2d0a832007dba303de8" exitCode=0 Oct 08 13:36:25 crc kubenswrapper[5065]: I1008 13:36:25.051877 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e7e9-account-create-8545w" event={"ID":"a86a58b3-276e-4dc0-96ea-b9d75f6e48c4","Type":"ContainerDied","Data":"dbeb728c095d1a4f766fd0ec012ec6be147cd0597e1ac2d0a832007dba303de8"} Oct 08 13:36:25 crc kubenswrapper[5065]: I1008 13:36:25.051902 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e7e9-account-create-8545w" event={"ID":"a86a58b3-276e-4dc0-96ea-b9d75f6e48c4","Type":"ContainerStarted","Data":"30b0e7794d9eb2ce06cc692067b7c6239d89a57f98828e3c25e3c4f153073185"} Oct 08 13:36:25 crc kubenswrapper[5065]: I1008 13:36:25.053485 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ca6e-account-create-r89cv" event={"ID":"b21ff978-d4cd-4a8f-a2b5-85990d9a3517","Type":"ContainerStarted","Data":"aafaa3692038375179cdf4db4ef846fd65c61c87d975a97dbb14f8eaed77caef"} Oct 08 13:36:25 crc kubenswrapper[5065]: I1008 13:36:25.176820 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795846498c-xgcvb"] Oct 08 13:36:25 crc kubenswrapper[5065]: W1008 13:36:25.240204 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cd5f555_7bfb_40ad_8bc2_465f83d669f5.slice/crio-2eca047406a3906da63106681078b4c0c1f0faaf291c70496f2c4c7b1e12e35b WatchSource:0}: Error finding container 2eca047406a3906da63106681078b4c0c1f0faaf291c70496f2c4c7b1e12e35b: Status 404 returned error can't find the container with id 2eca047406a3906da63106681078b4c0c1f0faaf291c70496f2c4c7b1e12e35b Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.064366 5065 generic.go:334] "Generic (PLEG): container finished" podID="9cd5f555-7bfb-40ad-8bc2-465f83d669f5" containerID="ef74f6dfc09253680561eedebf8ccf25045a66132d83db063845075aae56f394" exitCode=0 Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.064455 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795846498c-xgcvb" event={"ID":"9cd5f555-7bfb-40ad-8bc2-465f83d669f5","Type":"ContainerDied","Data":"ef74f6dfc09253680561eedebf8ccf25045a66132d83db063845075aae56f394"} Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.065079 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795846498c-xgcvb" 
event={"ID":"9cd5f555-7bfb-40ad-8bc2-465f83d669f5","Type":"ContainerStarted","Data":"2eca047406a3906da63106681078b4c0c1f0faaf291c70496f2c4c7b1e12e35b"} Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.068388 5065 generic.go:334] "Generic (PLEG): container finished" podID="b21ff978-d4cd-4a8f-a2b5-85990d9a3517" containerID="8e0abc75176f532b668e3aba769afb7b2bfe0a652480850844af97de5ed2dc38" exitCode=0 Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.068533 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ca6e-account-create-r89cv" event={"ID":"b21ff978-d4cd-4a8f-a2b5-85990d9a3517","Type":"ContainerDied","Data":"8e0abc75176f532b668e3aba769afb7b2bfe0a652480850844af97de5ed2dc38"} Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.540675 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e7e9-account-create-8545w" Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.556995 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5412-account-create-2vrxz" Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.562186 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-76dpp" Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.644175 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2mbm\" (UniqueName: \"kubernetes.io/projected/a86a58b3-276e-4dc0-96ea-b9d75f6e48c4-kube-api-access-f2mbm\") pod \"a86a58b3-276e-4dc0-96ea-b9d75f6e48c4\" (UID: \"a86a58b3-276e-4dc0-96ea-b9d75f6e48c4\") " Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.649705 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86a58b3-276e-4dc0-96ea-b9d75f6e48c4-kube-api-access-f2mbm" (OuterVolumeSpecName: "kube-api-access-f2mbm") pod "a86a58b3-276e-4dc0-96ea-b9d75f6e48c4" (UID: "a86a58b3-276e-4dc0-96ea-b9d75f6e48c4"). InnerVolumeSpecName "kube-api-access-f2mbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.745742 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e2de891-07e9-44cf-aa13-593ecc5f571a-combined-ca-bundle\") pod \"0e2de891-07e9-44cf-aa13-593ecc5f571a\" (UID: \"0e2de891-07e9-44cf-aa13-593ecc5f571a\") " Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.745992 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e2de891-07e9-44cf-aa13-593ecc5f571a-config-data\") pod \"0e2de891-07e9-44cf-aa13-593ecc5f571a\" (UID: \"0e2de891-07e9-44cf-aa13-593ecc5f571a\") " Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.746019 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n56hd\" (UniqueName: \"kubernetes.io/projected/7df6bcfb-1378-4a89-a8d4-fe66dd35f072-kube-api-access-n56hd\") pod \"7df6bcfb-1378-4a89-a8d4-fe66dd35f072\" (UID: \"7df6bcfb-1378-4a89-a8d4-fe66dd35f072\") " Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.746077 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rcq2\" (UniqueName: \"kubernetes.io/projected/0e2de891-07e9-44cf-aa13-593ecc5f571a-kube-api-access-6rcq2\") pod \"0e2de891-07e9-44cf-aa13-593ecc5f571a\" (UID: \"0e2de891-07e9-44cf-aa13-593ecc5f571a\") " Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.746505 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2mbm\" (UniqueName: \"kubernetes.io/projected/a86a58b3-276e-4dc0-96ea-b9d75f6e48c4-kube-api-access-f2mbm\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.750708 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df6bcfb-1378-4a89-a8d4-fe66dd35f072-kube-api-access-n56hd" (OuterVolumeSpecName: "kube-api-access-n56hd") pod "7df6bcfb-1378-4a89-a8d4-fe66dd35f072" (UID: "7df6bcfb-1378-4a89-a8d4-fe66dd35f072"). InnerVolumeSpecName "kube-api-access-n56hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.758631 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2de891-07e9-44cf-aa13-593ecc5f571a-kube-api-access-6rcq2" (OuterVolumeSpecName: "kube-api-access-6rcq2") pod "0e2de891-07e9-44cf-aa13-593ecc5f571a" (UID: "0e2de891-07e9-44cf-aa13-593ecc5f571a"). InnerVolumeSpecName "kube-api-access-6rcq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.768935 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e2de891-07e9-44cf-aa13-593ecc5f571a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e2de891-07e9-44cf-aa13-593ecc5f571a" (UID: "0e2de891-07e9-44cf-aa13-593ecc5f571a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.795743 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e2de891-07e9-44cf-aa13-593ecc5f571a-config-data" (OuterVolumeSpecName: "config-data") pod "0e2de891-07e9-44cf-aa13-593ecc5f571a" (UID: "0e2de891-07e9-44cf-aa13-593ecc5f571a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.847428 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e2de891-07e9-44cf-aa13-593ecc5f571a-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.847628 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n56hd\" (UniqueName: \"kubernetes.io/projected/7df6bcfb-1378-4a89-a8d4-fe66dd35f072-kube-api-access-n56hd\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.847706 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rcq2\" (UniqueName: \"kubernetes.io/projected/0e2de891-07e9-44cf-aa13-593ecc5f571a-kube-api-access-6rcq2\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:26 crc kubenswrapper[5065]: I1008 13:36:26.847755 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e2de891-07e9-44cf-aa13-593ecc5f571a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.080557 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5412-account-create-2vrxz" event={"ID":"7df6bcfb-1378-4a89-a8d4-fe66dd35f072","Type":"ContainerDied","Data":"d1a32ab52fae3e82517704bd2e7b3639b786900be8b9cbf032db15f93b814978"} Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.080593 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1a32ab52fae3e82517704bd2e7b3639b786900be8b9cbf032db15f93b814978" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.080646 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5412-account-create-2vrxz" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.084679 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e7e9-account-create-8545w" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.084739 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e7e9-account-create-8545w" event={"ID":"a86a58b3-276e-4dc0-96ea-b9d75f6e48c4","Type":"ContainerDied","Data":"30b0e7794d9eb2ce06cc692067b7c6239d89a57f98828e3c25e3c4f153073185"} Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.084760 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30b0e7794d9eb2ce06cc692067b7c6239d89a57f98828e3c25e3c4f153073185" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.086847 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-76dpp" event={"ID":"0e2de891-07e9-44cf-aa13-593ecc5f571a","Type":"ContainerDied","Data":"f260b7a8cb13103d68e2c3897f5e0b2f41073498f9f8edf49daa5030a7b5215f"} Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.086877 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f260b7a8cb13103d68e2c3897f5e0b2f41073498f9f8edf49daa5030a7b5215f" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.086934 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-76dpp" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.089237 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795846498c-xgcvb" event={"ID":"9cd5f555-7bfb-40ad-8bc2-465f83d669f5","Type":"ContainerStarted","Data":"b10433ae652c1cfa2298bf8bc715112bae35b74d9afad1321cdc2e4e4200de69"} Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.089548 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.110719 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-795846498c-xgcvb" podStartSLOduration=3.110695556 podStartE2EDuration="3.110695556s" podCreationTimestamp="2025-10-08 13:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:36:27.106192706 +0000 UTC m=+1088.883574463" watchObservedRunningTime="2025-10-08 13:36:27.110695556 +0000 UTC m=+1088.888077313" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.279117 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7v2k4"] Oct 08 13:36:27 crc kubenswrapper[5065]: E1008 13:36:27.279443 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2de891-07e9-44cf-aa13-593ecc5f571a" containerName="keystone-db-sync" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.279455 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2de891-07e9-44cf-aa13-593ecc5f571a" containerName="keystone-db-sync" Oct 08 13:36:27 crc kubenswrapper[5065]: E1008 13:36:27.279477 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a86a58b3-276e-4dc0-96ea-b9d75f6e48c4" containerName="mariadb-account-create" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.279483 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a86a58b3-276e-4dc0-96ea-b9d75f6e48c4" containerName="mariadb-account-create" Oct 08 13:36:27 crc kubenswrapper[5065]: E1008 13:36:27.279502 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df6bcfb-1378-4a89-a8d4-fe66dd35f072" containerName="mariadb-account-create" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.279509 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df6bcfb-1378-4a89-a8d4-fe66dd35f072" containerName="mariadb-account-create" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.279732 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e2de891-07e9-44cf-aa13-593ecc5f571a" containerName="keystone-db-sync" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.279748 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df6bcfb-1378-4a89-a8d4-fe66dd35f072" containerName="mariadb-account-create" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.279761 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a86a58b3-276e-4dc0-96ea-b9d75f6e48c4" containerName="mariadb-account-create" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.280296 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7v2k4"
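
Two things stand out in the entries above. First, pod_startup_latency_tracker.go derives podStartSLOduration as the watch-observed running time minus podCreationTimestamp; firstStartedPulling/lastFinishedPulling are the zero time because no image pull was needed. The arithmetic is easy to verify:

```go
// sloduration.go - reproduce the podStartSLOduration figure in the entry
// above from the two timestamps kubelet prints. Pure arithmetic check.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-10-08 13:36:24 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-10-08 13:36:27.110695556 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 3.110695556s, matching podStartSLOduration
}
```

Second, the E-level cpu_manager.go "RemoveStaleState" entries look like failures but are housekeeping: admitting keystone-bootstrap-7v2k4 triggers pruning of CPU and memory manager state left behind by the just-finished db-sync and account-create containers.
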
Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.284019 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.284339 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.284387 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.287234 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795846498c-xgcvb"] Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.287562 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j9fw4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.328964 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7v2k4"] Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.349607 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc"] Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.351429 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.357492 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-dns-svc\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.357544 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-ovsdbserver-nb\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.357597 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hslgf\" (UniqueName: \"kubernetes.io/projected/85a66a90-c9f4-468c-845a-d4877023f501-kube-api-access-hslgf\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.357636 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-credential-keys\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.357672 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-ovsdbserver-sb\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.357705 5065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-fernet-keys\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.357752 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-config-data\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.357773 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-config\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.357802 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-scripts\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.357826 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-dns-swift-storage-0\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.357968 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rz8t\" (UniqueName: \"kubernetes.io/projected/0b342099-e797-4386-9726-06e15cfb589c-kube-api-access-6rz8t\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.357993 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-combined-ca-bundle\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.376330 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc"] Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.458870 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hslgf\" (UniqueName: \"kubernetes.io/projected/85a66a90-c9f4-468c-845a-d4877023f501-kube-api-access-hslgf\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.458922 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-credential-keys\") pod 
\"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.458947 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-ovsdbserver-sb\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.458971 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-fernet-keys\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.459005 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-config\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.459022 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-config-data\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.459043 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-scripts\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.459059 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-dns-swift-storage-0\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.459085 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rz8t\" (UniqueName: \"kubernetes.io/projected/0b342099-e797-4386-9726-06e15cfb589c-kube-api-access-6rz8t\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.459100 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-combined-ca-bundle\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.459131 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-dns-svc\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 
13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.459148 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-ovsdbserver-nb\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.459946 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-ovsdbserver-nb\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.461987 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-config\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.462977 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-dns-swift-storage-0\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.463728 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-ovsdbserver-sb\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.466928 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-dns-svc\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.467990 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-combined-ca-bundle\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.473991 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-fernet-keys\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.476707 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-scripts\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.485366 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rz8t\" (UniqueName: 
\"kubernetes.io/projected/0b342099-e797-4386-9726-06e15cfb589c-kube-api-access-6rz8t\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.488211 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-config-data\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.490749 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-credential-keys\") pod \"keystone-bootstrap-7v2k4\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.506678 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hslgf\" (UniqueName: \"kubernetes.io/projected/85a66a90-c9f4-468c-845a-d4877023f501-kube-api-access-hslgf\") pod \"dnsmasq-dns-6b4bfdd7f7-pg5nc\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.516430 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.527038 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.544765 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.546638 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.546842 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.561471 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-config-data\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.561534 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-scripts\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.561570 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.561663 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8849d3af-fdf6-4ec0-a66f-58da38c924f5-run-httpd\") pod \"ceilometer-0\" (UID: 
\"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.561686 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brwns\" (UniqueName: \"kubernetes.io/projected/8849d3af-fdf6-4ec0-a66f-58da38c924f5-kube-api-access-brwns\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.561739 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8849d3af-fdf6-4ec0-a66f-58da38c924f5-log-httpd\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.561778 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.623036 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.662728 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8849d3af-fdf6-4ec0-a66f-58da38c924f5-run-httpd\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.662770 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brwns\" (UniqueName: \"kubernetes.io/projected/8849d3af-fdf6-4ec0-a66f-58da38c924f5-kube-api-access-brwns\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.662807 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8849d3af-fdf6-4ec0-a66f-58da38c924f5-log-httpd\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.662839 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.662871 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-config-data\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.662896 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-scripts\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 
13:36:27.662918 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.667684 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.668107 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8849d3af-fdf6-4ec0-a66f-58da38c924f5-log-httpd\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.668309 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8849d3af-fdf6-4ec0-a66f-58da38c924f5-run-httpd\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.673687 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.674312 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-config-data\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.675102 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-scripts\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.692585 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brwns\" (UniqueName: \"kubernetes.io/projected/8849d3af-fdf6-4ec0-a66f-58da38c924f5-kube-api-access-brwns\") pod \"ceilometer-0\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.696371 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hmh6z"] Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.697405 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.720919 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ca6e-account-create-r89cv"
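
The ceilometer-0 mount burst above mixes three volume plugins, visible in the UniqueName prefixes: kubernetes.io/secret (config-data, scripts, sg-core-conf-yaml, combined-ca-bundle), kubernetes.io/empty-dir (run-httpd, log-httpd), and kubernetes.io/projected (the kube-api-access-brwns service-account token). Below is a hedged reconstruction of the equivalent pod-spec volumes using the upstream Go client types from k8s.io/api/core/v1; the volume names and the ceilometer-scripts/ceilometer-config-data secret names come from the reflector entries at 13:36:27.546, while the remaining secret names are assumptions.

```go
// ceilometer_volumes.go - the volume set implied by the ceilometer-0 mount
// entries above, reconstructed with the upstream API types. Volume names
// come from the log; secret names marked "assumed" are guesses.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{Name: "config-data", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "ceilometer-config-data"}}},
		{Name: "scripts", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "ceilometer-scripts"}}},
		{Name: "sg-core-conf-yaml", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "sg-core-conf-yaml"}}}, // secret name assumed
		{Name: "combined-ca-bundle", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "combined-ca-bundle"}}}, // secret name assumed
		{Name: "run-httpd", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "log-httpd", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		// kube-api-access-brwns is the projected service-account token volume
		// that kubelet injects automatically; it is not declared in the spec.
	}
	for _, v := range volumes {
		fmt.Println(v.Name)
	}
}
```
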
Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.732984 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.742063 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc"] Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.745195 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.746607 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-d7bc8" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.750574 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.802867 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hmh6z"] Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.814883 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-jpkmx"] Oct 08 13:36:27 crc kubenswrapper[5065]: E1008 13:36:27.815278 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b21ff978-d4cd-4a8f-a2b5-85990d9a3517" containerName="mariadb-account-create" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.815291 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21ff978-d4cd-4a8f-a2b5-85990d9a3517" containerName="mariadb-account-create" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.815464 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="b21ff978-d4cd-4a8f-a2b5-85990d9a3517" containerName="mariadb-account-create" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.816378 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.857381 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-jpkmx"] Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.885803 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7h28\" (UniqueName: \"kubernetes.io/projected/b21ff978-d4cd-4a8f-a2b5-85990d9a3517-kube-api-access-c7h28\") pod \"b21ff978-d4cd-4a8f-a2b5-85990d9a3517\" (UID: \"b21ff978-d4cd-4a8f-a2b5-85990d9a3517\") " Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.886019 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxhvg\" (UniqueName: \"kubernetes.io/projected/12f7954b-c868-4e42-80d4-b7285d4f20e5-kube-api-access-rxhvg\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.886145 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-scripts\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.886169 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-combined-ca-bundle\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.886223 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-config-data\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.886249 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12f7954b-c868-4e42-80d4-b7285d4f20e5-logs\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.893324 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21ff978-d4cd-4a8f-a2b5-85990d9a3517-kube-api-access-c7h28" (OuterVolumeSpecName: "kube-api-access-c7h28") pod "b21ff978-d4cd-4a8f-a2b5-85990d9a3517" (UID: "b21ff978-d4cd-4a8f-a2b5-85990d9a3517"). InnerVolumeSpecName "kube-api-access-c7h28". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.987855 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.988882 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcvqj\" (UniqueName: \"kubernetes.io/projected/b37536e2-bf39-4698-ace5-c7d2755306c0-kube-api-access-gcvqj\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.988957 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-dns-svc\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.988989 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-config-data\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.989022 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12f7954b-c868-4e42-80d4-b7285d4f20e5-logs\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.989053 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.989081 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-config\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.989105 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxhvg\" (UniqueName: \"kubernetes.io/projected/12f7954b-c868-4e42-80d4-b7285d4f20e5-kube-api-access-rxhvg\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.989197 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.989226 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " 
pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.989257 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-scripts\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.989464 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-combined-ca-bundle\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.989610 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12f7954b-c868-4e42-80d4-b7285d4f20e5-logs\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.989614 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7h28\" (UniqueName: \"kubernetes.io/projected/b21ff978-d4cd-4a8f-a2b5-85990d9a3517-kube-api-access-c7h28\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:27 crc kubenswrapper[5065]: I1008 13:36:27.995917 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-scripts\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.001390 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-config-data\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.002899 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-combined-ca-bundle\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.013139 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxhvg\" (UniqueName: \"kubernetes.io/projected/12f7954b-c868-4e42-80d4-b7285d4f20e5-kube-api-access-rxhvg\") pod \"placement-db-sync-hmh6z\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.053104 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.091392 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.091472 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.091527 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcvqj\" (UniqueName: \"kubernetes.io/projected/b37536e2-bf39-4698-ace5-c7d2755306c0-kube-api-access-gcvqj\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.091583 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-dns-svc\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.091627 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.091654 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-config\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.092851 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.093025 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-config\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.093101 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-dns-svc\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.093607 5065 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.093988 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.105864 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ca6e-account-create-r89cv" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.107077 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ca6e-account-create-r89cv" event={"ID":"b21ff978-d4cd-4a8f-a2b5-85990d9a3517","Type":"ContainerDied","Data":"aafaa3692038375179cdf4db4ef846fd65c61c87d975a97dbb14f8eaed77caef"} Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.107105 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aafaa3692038375179cdf4db4ef846fd65c61c87d975a97dbb14f8eaed77caef" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.113435 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcvqj\" (UniqueName: \"kubernetes.io/projected/b37536e2-bf39-4698-ace5-c7d2755306c0-kube-api-access-gcvqj\") pod \"dnsmasq-dns-5dc68bd5-jpkmx\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.155207 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.198657 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7v2k4"] Oct 08 13:36:28 crc kubenswrapper[5065]: W1008 13:36:28.333467 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85a66a90_c9f4_468c_845a_d4877023f501.slice/crio-a3bccdf74027f73049124642483e3efb36eb6dd353cea3ef583dd828bf7add5c WatchSource:0}: Error finding container a3bccdf74027f73049124642483e3efb36eb6dd353cea3ef583dd828bf7add5c: Status 404 returned error can't find the container with id a3bccdf74027f73049124642483e3efb36eb6dd353cea3ef583dd828bf7add5c Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.340398 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc"] Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.458491 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.460716 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
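
By this point three generations of the dnsmasq-dns deployment have appeared within about four seconds: 795846498c-xgcvb went Ready and was DELETEd at 13:36:27.287, 6b4bfdd7f7-pg5nc was DELETEd before its sandbox ever started, and 5dc68bd5-jpkmx is now being set up. The hash in each name is the ReplicaSet pod-template-hash, so every regeneration of the dnsmasq config (the dns-svc, dns-swift-storage-0 and ovsdbserver-nb/sb configmaps) appears to roll the Deployment again during bring-up. A small sketch that pulls first-seen hashes out of SyncLoop lines; the pod names are from the log, and the <deployment>-<hash>-<suffix> naming is the usual convention, assumed here:

```go
// rollouts.go - extract the ReplicaSet pod-template-hash from pod names such
// as dnsmasq-dns-795846498c-xgcvb and print first-seen order, showing how
// often the Deployment rolled. Offline sketch over the SyncLoop entries above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	podRe := regexp.MustCompile(`openstack/(dnsmasq-dns)-([0-9a-f]{5,10})-[a-z0-9]{5}`)
	lines := []string{ // SyncLoop events in order of appearance
		`"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795846498c-xgcvb"]`,
		`"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc"]`,
		`"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-jpkmx"]`,
	}
	seen := map[string]bool{}
	for _, l := range lines {
		if m := podRe.FindStringSubmatch(l); m != nil && !seen[m[2]] {
			seen[m[2]] = true
			fmt.Printf("%s rolled to pod-template-hash %s\n", m[1], m[2])
		}
	}
}
```
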
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.464163 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.464343 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jcsf2" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.464541 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.465718 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.470928 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.477855 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.543176 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-jpkmx"] Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.570013 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.573683 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.577378 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.577622 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.600697 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.605236 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.605264 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95ba406a-612c-413a-8271-a4de5a63488d-logs\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.605283 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.605318 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/95ba406a-612c-413a-8271-a4de5a63488d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.605338 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.605434 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.605500 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.605824 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcgkt\" (UniqueName: \"kubernetes.io/projected/95ba406a-612c-413a-8271-a4de5a63488d-kube-api-access-qcgkt\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: W1008 13:36:28.606794 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12f7954b_c868_4e42_80d4_b7285d4f20e5.slice/crio-163d5dd919424aad86b98de6d9144bc08e468398b1d06138bbe8f569f693c547 WatchSource:0}: Error finding container 163d5dd919424aad86b98de6d9144bc08e468398b1d06138bbe8f569f693c547: Status 404 returned error can't find the container with id 163d5dd919424aad86b98de6d9144bc08e468398b1d06138bbe8f569f693c547 Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.610188 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hmh6z"] Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710045 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710096 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95ba406a-612c-413a-8271-a4de5a63488d-logs\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710118 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710145 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-config-data\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710186 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95ba406a-612c-413a-8271-a4de5a63488d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710224 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710345 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710471 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710521 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710555 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f60d1424-d26c-4ae2-b2c7-418691101303-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710653 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60d1424-d26c-4ae2-b2c7-418691101303-logs\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710707 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710753 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710794 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.711821 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95ba406a-612c-413a-8271-a4de5a63488d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.710652 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.715143 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcgkt\" (UniqueName: \"kubernetes.io/projected/95ba406a-612c-413a-8271-a4de5a63488d-kube-api-access-qcgkt\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.715231 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zldqq\" (UniqueName: \"kubernetes.io/projected/f60d1424-d26c-4ae2-b2c7-418691101303-kube-api-access-zldqq\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.715752 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95ba406a-612c-413a-8271-a4de5a63488d-logs\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.721346 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.726142 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.726321 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.729862 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.742297 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcgkt\" (UniqueName: \"kubernetes.io/projected/95ba406a-612c-413a-8271-a4de5a63488d-kube-api-access-qcgkt\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.762833 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.819142 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-config-data\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.819516 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.819540 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f60d1424-d26c-4ae2-b2c7-418691101303-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.819569 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60d1424-d26c-4ae2-b2c7-418691101303-logs\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.819593 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-scripts\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.819614 5065 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.819634 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.819688 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zldqq\" (UniqueName: \"kubernetes.io/projected/f60d1424-d26c-4ae2-b2c7-418691101303-kube-api-access-zldqq\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.820286 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60d1424-d26c-4ae2-b2c7-418691101303-logs\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.820277 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f60d1424-d26c-4ae2-b2c7-418691101303-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.820364 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.823811 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-scripts\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.824099 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-config-data\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.830236 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.830252 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.851810 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zldqq\" (UniqueName: \"kubernetes.io/projected/f60d1424-d26c-4ae2-b2c7-418691101303-kube-api-access-zldqq\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:28 crc kubenswrapper[5065]: I1008 13:36:28.858319 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.011792 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-jhs7h"] Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.012879 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.019714 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.020087 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.021329 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-px6gq" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.030182 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jhs7h"] Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.039853 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.069816 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.130230 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-scripts\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.130290 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5a257f6-4b74-429b-9da0-b76051265822-etc-machine-id\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.130348 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-db-sync-config-data\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.130450 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mtbz\" (UniqueName: \"kubernetes.io/projected/c5a257f6-4b74-429b-9da0-b76051265822-kube-api-access-7mtbz\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.130520 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-config-data\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.130555 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-combined-ca-bundle\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.145607 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hmh6z" event={"ID":"12f7954b-c868-4e42-80d4-b7285d4f20e5","Type":"ContainerStarted","Data":"163d5dd919424aad86b98de6d9144bc08e468398b1d06138bbe8f569f693c547"} Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.147249 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7v2k4" event={"ID":"0b342099-e797-4386-9726-06e15cfb589c","Type":"ContainerStarted","Data":"3950fc88600878c7ce01c00b14045b44eb1ff97659411df7323c5cc0768183e6"} Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.161325 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" event={"ID":"b37536e2-bf39-4698-ace5-c7d2755306c0","Type":"ContainerStarted","Data":"0994a306d6c4f384d3652150bb824cec0200b9b64c7e29e778e74fa820af27e3"} Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.163200 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8849d3af-fdf6-4ec0-a66f-58da38c924f5","Type":"ContainerStarted","Data":"cb8351155cf1aa8c2c261515cb436b4b3256edd710555615b66d6d012fe71e0b"} Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.164520 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" event={"ID":"85a66a90-c9f4-468c-845a-d4877023f501","Type":"ContainerStarted","Data":"a3bccdf74027f73049124642483e3efb36eb6dd353cea3ef583dd828bf7add5c"} Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.164683 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-795846498c-xgcvb" podUID="9cd5f555-7bfb-40ad-8bc2-465f83d669f5" containerName="dnsmasq-dns" containerID="cri-o://b10433ae652c1cfa2298bf8bc715112bae35b74d9afad1321cdc2e4e4200de69" gracePeriod=10 Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.232945 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-combined-ca-bundle\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.233039 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-scripts\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.233069 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5a257f6-4b74-429b-9da0-b76051265822-etc-machine-id\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.233101 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-db-sync-config-data\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.233136 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mtbz\" (UniqueName: \"kubernetes.io/projected/c5a257f6-4b74-429b-9da0-b76051265822-kube-api-access-7mtbz\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.233176 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-config-data\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.234031 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5a257f6-4b74-429b-9da0-b76051265822-etc-machine-id\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.244005 5065 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-db-sync-config-data\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.245076 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-scripts\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.255769 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-config-data\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.261045 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mtbz\" (UniqueName: \"kubernetes.io/projected/c5a257f6-4b74-429b-9da0-b76051265822-kube-api-access-7mtbz\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.263299 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-combined-ca-bundle\") pod \"cinder-db-sync-jhs7h\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.268820 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-d7qm9"] Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.270190 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-d7qm9" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.273402 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vjnn6" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.273677 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.291329 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-d7qm9"] Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.341028 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.442442 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70fd4e43-69f4-482f-a374-2b8074e6a1d7-db-sync-config-data\") pod \"barbican-db-sync-d7qm9\" (UID: \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\") " pod="openstack/barbican-db-sync-d7qm9" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.442586 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb72f\" (UniqueName: \"kubernetes.io/projected/70fd4e43-69f4-482f-a374-2b8074e6a1d7-kube-api-access-tb72f\") pod \"barbican-db-sync-d7qm9\" (UID: \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\") " pod="openstack/barbican-db-sync-d7qm9" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.442925 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70fd4e43-69f4-482f-a374-2b8074e6a1d7-combined-ca-bundle\") pod \"barbican-db-sync-d7qm9\" (UID: \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\") " pod="openstack/barbican-db-sync-d7qm9" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.491538 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vb8jx"] Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.493238 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vb8jx" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.496398 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vb8jx"] Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.497639 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-n7lsb" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.497894 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.498071 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.544934 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70fd4e43-69f4-482f-a374-2b8074e6a1d7-db-sync-config-data\") pod \"barbican-db-sync-d7qm9\" (UID: \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\") " pod="openstack/barbican-db-sync-d7qm9" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.545004 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb72f\" (UniqueName: \"kubernetes.io/projected/70fd4e43-69f4-482f-a374-2b8074e6a1d7-kube-api-access-tb72f\") pod \"barbican-db-sync-d7qm9\" (UID: \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\") " pod="openstack/barbican-db-sync-d7qm9" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.545132 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70fd4e43-69f4-482f-a374-2b8074e6a1d7-combined-ca-bundle\") pod \"barbican-db-sync-d7qm9\" (UID: \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\") " pod="openstack/barbican-db-sync-d7qm9" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.552478 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70fd4e43-69f4-482f-a374-2b8074e6a1d7-combined-ca-bundle\") pod \"barbican-db-sync-d7qm9\" (UID: \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\") " pod="openstack/barbican-db-sync-d7qm9" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.554864 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70fd4e43-69f4-482f-a374-2b8074e6a1d7-db-sync-config-data\") pod \"barbican-db-sync-d7qm9\" (UID: \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\") " pod="openstack/barbican-db-sync-d7qm9" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.573471 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb72f\" (UniqueName: \"kubernetes.io/projected/70fd4e43-69f4-482f-a374-2b8074e6a1d7-kube-api-access-tb72f\") pod \"barbican-db-sync-d7qm9\" (UID: \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\") " pod="openstack/barbican-db-sync-d7qm9" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.592970 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-d7qm9" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.647837 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.649308 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49cfd\" (UniqueName: \"kubernetes.io/projected/323689c5-d75e-44c2-aa45-3728bea780ff-kube-api-access-49cfd\") pod \"neutron-db-sync-vb8jx\" (UID: \"323689c5-d75e-44c2-aa45-3728bea780ff\") " pod="openstack/neutron-db-sync-vb8jx" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.649876 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323689c5-d75e-44c2-aa45-3728bea780ff-combined-ca-bundle\") pod \"neutron-db-sync-vb8jx\" (UID: \"323689c5-d75e-44c2-aa45-3728bea780ff\") " pod="openstack/neutron-db-sync-vb8jx" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.652148 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/323689c5-d75e-44c2-aa45-3728bea780ff-config\") pod \"neutron-db-sync-vb8jx\" (UID: \"323689c5-d75e-44c2-aa45-3728bea780ff\") " pod="openstack/neutron-db-sync-vb8jx" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.754369 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323689c5-d75e-44c2-aa45-3728bea780ff-combined-ca-bundle\") pod \"neutron-db-sync-vb8jx\" (UID: \"323689c5-d75e-44c2-aa45-3728bea780ff\") " pod="openstack/neutron-db-sync-vb8jx" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.754452 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/323689c5-d75e-44c2-aa45-3728bea780ff-config\") pod \"neutron-db-sync-vb8jx\" (UID: \"323689c5-d75e-44c2-aa45-3728bea780ff\") " pod="openstack/neutron-db-sync-vb8jx" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.754485 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49cfd\" (UniqueName: \"kubernetes.io/projected/323689c5-d75e-44c2-aa45-3728bea780ff-kube-api-access-49cfd\") pod 
\"neutron-db-sync-vb8jx\" (UID: \"323689c5-d75e-44c2-aa45-3728bea780ff\") " pod="openstack/neutron-db-sync-vb8jx" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.766878 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/323689c5-d75e-44c2-aa45-3728bea780ff-config\") pod \"neutron-db-sync-vb8jx\" (UID: \"323689c5-d75e-44c2-aa45-3728bea780ff\") " pod="openstack/neutron-db-sync-vb8jx" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.770182 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323689c5-d75e-44c2-aa45-3728bea780ff-combined-ca-bundle\") pod \"neutron-db-sync-vb8jx\" (UID: \"323689c5-d75e-44c2-aa45-3728bea780ff\") " pod="openstack/neutron-db-sync-vb8jx" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.770188 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.780710 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49cfd\" (UniqueName: \"kubernetes.io/projected/323689c5-d75e-44c2-aa45-3728bea780ff-kube-api-access-49cfd\") pod \"neutron-db-sync-vb8jx\" (UID: \"323689c5-d75e-44c2-aa45-3728bea780ff\") " pod="openstack/neutron-db-sync-vb8jx" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.821037 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vb8jx" Oct 08 13:36:29 crc kubenswrapper[5065]: I1008 13:36:29.899746 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jhs7h"] Oct 08 13:36:29 crc kubenswrapper[5065]: W1008 13:36:29.908203 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a257f6_4b74_429b_9da0_b76051265822.slice/crio-a89c20da0276a3947aed1ced43898875afc887bdfcfa04aee8f56a3d061dd158 WatchSource:0}: Error finding container a89c20da0276a3947aed1ced43898875afc887bdfcfa04aee8f56a3d061dd158: Status 404 returned error can't find the container with id a89c20da0276a3947aed1ced43898875afc887bdfcfa04aee8f56a3d061dd158 Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.078047 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-d7qm9"] Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.188443 5065 generic.go:334] "Generic (PLEG): container finished" podID="b37536e2-bf39-4698-ace5-c7d2755306c0" containerID="cc8cd6afc89a19767c8c125f86ef0f0c7538663941936385347a03076fabb311" exitCode=0 Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.188546 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" event={"ID":"b37536e2-bf39-4698-ace5-c7d2755306c0","Type":"ContainerDied","Data":"cc8cd6afc89a19767c8c125f86ef0f0c7538663941936385347a03076fabb311"} Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.195730 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d7qm9" event={"ID":"70fd4e43-69f4-482f-a374-2b8074e6a1d7","Type":"ContainerStarted","Data":"18f21067bc9465852b3d26fd1af5c0fda2274042004002886c07170525d38af5"} Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.198698 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jhs7h" 
event={"ID":"c5a257f6-4b74-429b-9da0-b76051265822","Type":"ContainerStarted","Data":"a89c20da0276a3947aed1ced43898875afc887bdfcfa04aee8f56a3d061dd158"} Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.210065 5065 generic.go:334] "Generic (PLEG): container finished" podID="9cd5f555-7bfb-40ad-8bc2-465f83d669f5" containerID="b10433ae652c1cfa2298bf8bc715112bae35b74d9afad1321cdc2e4e4200de69" exitCode=0 Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.210142 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795846498c-xgcvb" event={"ID":"9cd5f555-7bfb-40ad-8bc2-465f83d669f5","Type":"ContainerDied","Data":"b10433ae652c1cfa2298bf8bc715112bae35b74d9afad1321cdc2e4e4200de69"} Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.211566 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f60d1424-d26c-4ae2-b2c7-418691101303","Type":"ContainerStarted","Data":"3824f96e619151b82d25297eaa1f57efa29227593371e361884142f2b45a464f"} Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.229831 5065 generic.go:334] "Generic (PLEG): container finished" podID="85a66a90-c9f4-468c-845a-d4877023f501" containerID="ab43ff6b0f17b496454ce879565dd9758068e96afd44502f6547cd661fc7e06d" exitCode=0 Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.229900 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" event={"ID":"85a66a90-c9f4-468c-845a-d4877023f501","Type":"ContainerDied","Data":"ab43ff6b0f17b496454ce879565dd9758068e96afd44502f6547cd661fc7e06d"} Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.239653 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95ba406a-612c-413a-8271-a4de5a63488d","Type":"ContainerStarted","Data":"857a659d6438dd3395e728962ab196355cb8bcb904e943ac2745535e2852a9ea"} Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.244586 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7v2k4" event={"ID":"0b342099-e797-4386-9726-06e15cfb589c","Type":"ContainerStarted","Data":"8f9a531617fee5e3dd656a56664dbd0591c6269efbd6664e714313e687b6be8f"} Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.280295 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7v2k4" podStartSLOduration=3.280276701 podStartE2EDuration="3.280276701s" podCreationTimestamp="2025-10-08 13:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:36:30.271032255 +0000 UTC m=+1092.048414042" watchObservedRunningTime="2025-10-08 13:36:30.280276701 +0000 UTC m=+1092.057658458" Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.347486 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vb8jx"] Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.687700 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.789318 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-ovsdbserver-nb\") pod \"85a66a90-c9f4-468c-845a-d4877023f501\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.789431 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-config\") pod \"85a66a90-c9f4-468c-845a-d4877023f501\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.789584 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-dns-svc\") pod \"85a66a90-c9f4-468c-845a-d4877023f501\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.789628 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-ovsdbserver-sb\") pod \"85a66a90-c9f4-468c-845a-d4877023f501\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.789654 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hslgf\" (UniqueName: \"kubernetes.io/projected/85a66a90-c9f4-468c-845a-d4877023f501-kube-api-access-hslgf\") pod \"85a66a90-c9f4-468c-845a-d4877023f501\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.789710 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-dns-swift-storage-0\") pod \"85a66a90-c9f4-468c-845a-d4877023f501\" (UID: \"85a66a90-c9f4-468c-845a-d4877023f501\") " Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.812032 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a66a90-c9f4-468c-845a-d4877023f501-kube-api-access-hslgf" (OuterVolumeSpecName: "kube-api-access-hslgf") pod "85a66a90-c9f4-468c-845a-d4877023f501" (UID: "85a66a90-c9f4-468c-845a-d4877023f501"). InnerVolumeSpecName "kube-api-access-hslgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.848490 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.892195 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hslgf\" (UniqueName: \"kubernetes.io/projected/85a66a90-c9f4-468c-845a-d4877023f501-kube-api-access-hslgf\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.975643 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "85a66a90-c9f4-468c-845a-d4877023f501" (UID: "85a66a90-c9f4-468c-845a-d4877023f501"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:30 crc kubenswrapper[5065]: I1008 13:36:30.994000 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.031914 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "85a66a90-c9f4-468c-845a-d4877023f501" (UID: "85a66a90-c9f4-468c-845a-d4877023f501"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.056326 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-config" (OuterVolumeSpecName: "config") pod "85a66a90-c9f4-468c-845a-d4877023f501" (UID: "85a66a90-c9f4-468c-845a-d4877023f501"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.061014 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "85a66a90-c9f4-468c-845a-d4877023f501" (UID: "85a66a90-c9f4-468c-845a-d4877023f501"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.067915 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "85a66a90-c9f4-468c-845a-d4877023f501" (UID: "85a66a90-c9f4-468c-845a-d4877023f501"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.098891 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.098969 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.105673 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.105711 5065 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.105720 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.105737 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a66a90-c9f4-468c-845a-d4877023f501-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.118860 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.207062 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-dns-svc\") pod \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.207218 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-ovsdbserver-nb\") pod \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.207273 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-dns-swift-storage-0\") pod \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.207307 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-ovsdbserver-sb\") pod \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.207345 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bmqk\" (UniqueName: \"kubernetes.io/projected/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-kube-api-access-8bmqk\") pod \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.207403 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-config\") pod \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\" (UID: \"9cd5f555-7bfb-40ad-8bc2-465f83d669f5\") " Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.217174 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-kube-api-access-8bmqk" (OuterVolumeSpecName: "kube-api-access-8bmqk") pod "9cd5f555-7bfb-40ad-8bc2-465f83d669f5" (UID: "9cd5f555-7bfb-40ad-8bc2-465f83d669f5"). InnerVolumeSpecName "kube-api-access-8bmqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.258377 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795846498c-xgcvb" event={"ID":"9cd5f555-7bfb-40ad-8bc2-465f83d669f5","Type":"ContainerDied","Data":"2eca047406a3906da63106681078b4c0c1f0faaf291c70496f2c4c7b1e12e35b"} Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.258441 5065 scope.go:117] "RemoveContainer" containerID="b10433ae652c1cfa2298bf8bc715112bae35b74d9afad1321cdc2e4e4200de69" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.258543 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795846498c-xgcvb" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.261355 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f60d1424-d26c-4ae2-b2c7-418691101303","Type":"ContainerStarted","Data":"6090e802ca2b9a791d3e4ba4e54e3c6345b0806ac119b8eeef830a8c6e509d04"} Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.266050 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-config" (OuterVolumeSpecName: "config") pod "9cd5f555-7bfb-40ad-8bc2-465f83d669f5" (UID: "9cd5f555-7bfb-40ad-8bc2-465f83d669f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.267241 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9cd5f555-7bfb-40ad-8bc2-465f83d669f5" (UID: "9cd5f555-7bfb-40ad-8bc2-465f83d669f5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.267320 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.267345 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc" event={"ID":"85a66a90-c9f4-468c-845a-d4877023f501","Type":"ContainerDied","Data":"a3bccdf74027f73049124642483e3efb36eb6dd353cea3ef583dd828bf7add5c"} Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.284074 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" event={"ID":"b37536e2-bf39-4698-ace5-c7d2755306c0","Type":"ContainerStarted","Data":"64e43f66b9824ebbefe124763ee03223e5c39cbcc2de87cafa33226497ea768e"} Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.284525 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.286556 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vb8jx" event={"ID":"323689c5-d75e-44c2-aa45-3728bea780ff","Type":"ContainerStarted","Data":"261c0ec977b629677c045bdc2ee5e30127513fae7112384f6c4b0f456d3559ae"} Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.290906 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9cd5f555-7bfb-40ad-8bc2-465f83d669f5" (UID: "9cd5f555-7bfb-40ad-8bc2-465f83d669f5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.293396 5065 scope.go:117] "RemoveContainer" containerID="ef74f6dfc09253680561eedebf8ccf25045a66132d83db063845075aae56f394" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.295854 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9cd5f555-7bfb-40ad-8bc2-465f83d669f5" (UID: "9cd5f555-7bfb-40ad-8bc2-465f83d669f5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.302601 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9cd5f555-7bfb-40ad-8bc2-465f83d669f5" (UID: "9cd5f555-7bfb-40ad-8bc2-465f83d669f5"). InnerVolumeSpecName "ovsdbserver-nb". 
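The reconciler records above trace a volume's unmount lifecycle in three steps: "operationExecutor.UnmountVolume started", then "UnmountVolume.TearDown succeeded", then "Volume detached ... DevicePath \"\"". A minimal sketch of cross-checking that every started unmount reached the detached state, assuming the journal has been saved to a plain-text file with one record per line (as journalctl emits it); the filename and function name are illustrative, and the patterns target the backslash-escaped quoting klog uses in these structured records:

import re
import sys
from collections import OrderedDict

# klog escapes inner quotes, so the journal text literally contains \" around
# values; the patterns below match that two-character sequence.
START = re.compile(r'UnmountVolume started for volume \\"([^"\\]+)\\" '
                   r'\(UniqueName: \\"([^"\\]+)\\"')
DETACHED = re.compile(r'Volume detached for volume \\"([^"\\]+)\\" '
                      r'\(UniqueName: \\"([^"\\]+)\\"')

def pending_unmounts(lines):
    """Return volumes whose unmount started but never reported detached."""
    pending = OrderedDict()  # UniqueName -> volume name
    for line in lines:
        m = START.search(line)
        if m:
            pending[m.group(2)] = m.group(1)
            continue
        m = DETACHED.search(line)
        if m:
            pending.pop(m.group(2), None)
    return pending

if __name__ == "__main__":
    # e.g. journalctl -u kubelet > kubelet.log (kubelet.log is a stand-in name)
    with open(sys.argv[1]) as f:
        for uniq, vol in pending_unmounts(f).items():
            print(f"still pending: {vol} ({uniq})")

Volumes that appear only with a "started" record, such as the keystone-bootstrap unmounts at the truncated tail of this excerpt, would be flagged as still pending.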
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.308920 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.308964 5065 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.309027 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.309052 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bmqk\" (UniqueName: \"kubernetes.io/projected/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-kube-api-access-8bmqk\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.309066 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.309079 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cd5f555-7bfb-40ad-8bc2-465f83d669f5-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.310595 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" podStartSLOduration=4.310559121 podStartE2EDuration="4.310559121s" podCreationTimestamp="2025-10-08 13:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:36:31.308232019 +0000 UTC m=+1093.085613776" watchObservedRunningTime="2025-10-08 13:36:31.310559121 +0000 UTC m=+1093.087940878" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.328722 5065 scope.go:117] "RemoveContainer" containerID="ab43ff6b0f17b496454ce879565dd9758068e96afd44502f6547cd661fc7e06d" Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.369703 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc"] Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.380139 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b4bfdd7f7-pg5nc"] Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.666506 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795846498c-xgcvb"] Oct 08 13:36:31 crc kubenswrapper[5065]: I1008 13:36:31.676102 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-795846498c-xgcvb"] Oct 08 13:36:32 crc kubenswrapper[5065]: I1008 13:36:32.312934 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95ba406a-612c-413a-8271-a4de5a63488d","Type":"ContainerStarted","Data":"1a6a6b06ad2d93b5ce5d17373e66db84a71acfdeff0ee2183cedf52bacccb7f9"} Oct 08 13:36:32 crc kubenswrapper[5065]: I1008 13:36:32.313219 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"95ba406a-612c-413a-8271-a4de5a63488d","Type":"ContainerStarted","Data":"52b4a1e571050250fcccdd0e86addffa00e382cff803c9e59ccf72b27f869227"} Oct 08 13:36:32 crc kubenswrapper[5065]: I1008 13:36:32.313115 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="95ba406a-612c-413a-8271-a4de5a63488d" containerName="glance-httpd" containerID="cri-o://1a6a6b06ad2d93b5ce5d17373e66db84a71acfdeff0ee2183cedf52bacccb7f9" gracePeriod=30 Oct 08 13:36:32 crc kubenswrapper[5065]: I1008 13:36:32.313084 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="95ba406a-612c-413a-8271-a4de5a63488d" containerName="glance-log" containerID="cri-o://52b4a1e571050250fcccdd0e86addffa00e382cff803c9e59ccf72b27f869227" gracePeriod=30 Oct 08 13:36:32 crc kubenswrapper[5065]: I1008 13:36:32.315935 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vb8jx" event={"ID":"323689c5-d75e-44c2-aa45-3728bea780ff","Type":"ContainerStarted","Data":"16bba1324f0442b219591e075d3110a041e45127e394932b36be8c1e6f2c21f2"} Oct 08 13:36:32 crc kubenswrapper[5065]: I1008 13:36:32.326953 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f60d1424-d26c-4ae2-b2c7-418691101303","Type":"ContainerStarted","Data":"5ffac99aad50949e57390777e162ffcbe4ca98b91e71a8383e699b9bae970b0e"} Oct 08 13:36:32 crc kubenswrapper[5065]: I1008 13:36:32.327138 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f60d1424-d26c-4ae2-b2c7-418691101303" containerName="glance-log" containerID="cri-o://6090e802ca2b9a791d3e4ba4e54e3c6345b0806ac119b8eeef830a8c6e509d04" gracePeriod=30 Oct 08 13:36:32 crc kubenswrapper[5065]: I1008 13:36:32.327226 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f60d1424-d26c-4ae2-b2c7-418691101303" containerName="glance-httpd" containerID="cri-o://5ffac99aad50949e57390777e162ffcbe4ca98b91e71a8383e699b9bae970b0e" gracePeriod=30 Oct 08 13:36:32 crc kubenswrapper[5065]: I1008 13:36:32.340767 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.340748879 podStartE2EDuration="5.340748879s" podCreationTimestamp="2025-10-08 13:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:36:32.335846688 +0000 UTC m=+1094.113228435" watchObservedRunningTime="2025-10-08 13:36:32.340748879 +0000 UTC m=+1094.118130636" Oct 08 13:36:32 crc kubenswrapper[5065]: I1008 13:36:32.355031 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.355010378 podStartE2EDuration="5.355010378s" podCreationTimestamp="2025-10-08 13:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:36:32.353612471 +0000 UTC m=+1094.130994228" watchObservedRunningTime="2025-10-08 13:36:32.355010378 +0000 UTC m=+1094.132392135" Oct 08 13:36:32 crc kubenswrapper[5065]: I1008 13:36:32.384660 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vb8jx" 
podStartSLOduration=3.384644057 podStartE2EDuration="3.384644057s" podCreationTimestamp="2025-10-08 13:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:36:32.379513521 +0000 UTC m=+1094.156895278" watchObservedRunningTime="2025-10-08 13:36:32.384644057 +0000 UTC m=+1094.162025814" Oct 08 13:36:32 crc kubenswrapper[5065]: I1008 13:36:32.901791 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85a66a90-c9f4-468c-845a-d4877023f501" path="/var/lib/kubelet/pods/85a66a90-c9f4-468c-845a-d4877023f501/volumes" Oct 08 13:36:32 crc kubenswrapper[5065]: I1008 13:36:32.902829 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd5f555-7bfb-40ad-8bc2-465f83d669f5" path="/var/lib/kubelet/pods/9cd5f555-7bfb-40ad-8bc2-465f83d669f5/volumes" Oct 08 13:36:33 crc kubenswrapper[5065]: I1008 13:36:33.340465 5065 generic.go:334] "Generic (PLEG): container finished" podID="f60d1424-d26c-4ae2-b2c7-418691101303" containerID="5ffac99aad50949e57390777e162ffcbe4ca98b91e71a8383e699b9bae970b0e" exitCode=143 Oct 08 13:36:33 crc kubenswrapper[5065]: I1008 13:36:33.340496 5065 generic.go:334] "Generic (PLEG): container finished" podID="f60d1424-d26c-4ae2-b2c7-418691101303" containerID="6090e802ca2b9a791d3e4ba4e54e3c6345b0806ac119b8eeef830a8c6e509d04" exitCode=143 Oct 08 13:36:33 crc kubenswrapper[5065]: I1008 13:36:33.340544 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f60d1424-d26c-4ae2-b2c7-418691101303","Type":"ContainerDied","Data":"5ffac99aad50949e57390777e162ffcbe4ca98b91e71a8383e699b9bae970b0e"} Oct 08 13:36:33 crc kubenswrapper[5065]: I1008 13:36:33.340609 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f60d1424-d26c-4ae2-b2c7-418691101303","Type":"ContainerDied","Data":"6090e802ca2b9a791d3e4ba4e54e3c6345b0806ac119b8eeef830a8c6e509d04"} Oct 08 13:36:33 crc kubenswrapper[5065]: I1008 13:36:33.342983 5065 generic.go:334] "Generic (PLEG): container finished" podID="95ba406a-612c-413a-8271-a4de5a63488d" containerID="1a6a6b06ad2d93b5ce5d17373e66db84a71acfdeff0ee2183cedf52bacccb7f9" exitCode=143 Oct 08 13:36:33 crc kubenswrapper[5065]: I1008 13:36:33.343008 5065 generic.go:334] "Generic (PLEG): container finished" podID="95ba406a-612c-413a-8271-a4de5a63488d" containerID="52b4a1e571050250fcccdd0e86addffa00e382cff803c9e59ccf72b27f869227" exitCode=143 Oct 08 13:36:33 crc kubenswrapper[5065]: I1008 13:36:33.343057 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95ba406a-612c-413a-8271-a4de5a63488d","Type":"ContainerDied","Data":"1a6a6b06ad2d93b5ce5d17373e66db84a71acfdeff0ee2183cedf52bacccb7f9"} Oct 08 13:36:33 crc kubenswrapper[5065]: I1008 13:36:33.343102 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95ba406a-612c-413a-8271-a4de5a63488d","Type":"ContainerDied","Data":"52b4a1e571050250fcccdd0e86addffa00e382cff803c9e59ccf72b27f869227"} Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.347667 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.371083 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95ba406a-612c-413a-8271-a4de5a63488d","Type":"ContainerDied","Data":"857a659d6438dd3395e728962ab196355cb8bcb904e943ac2745535e2852a9ea"} Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.371131 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.371150 5065 scope.go:117] "RemoveContainer" containerID="1a6a6b06ad2d93b5ce5d17373e66db84a71acfdeff0ee2183cedf52bacccb7f9" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.491299 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95ba406a-612c-413a-8271-a4de5a63488d-logs\") pod \"95ba406a-612c-413a-8271-a4de5a63488d\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.491341 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcgkt\" (UniqueName: \"kubernetes.io/projected/95ba406a-612c-413a-8271-a4de5a63488d-kube-api-access-qcgkt\") pod \"95ba406a-612c-413a-8271-a4de5a63488d\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.491375 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"95ba406a-612c-413a-8271-a4de5a63488d\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.491491 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-config-data\") pod \"95ba406a-612c-413a-8271-a4de5a63488d\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.491589 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-internal-tls-certs\") pod \"95ba406a-612c-413a-8271-a4de5a63488d\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.491646 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-scripts\") pod \"95ba406a-612c-413a-8271-a4de5a63488d\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.491737 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95ba406a-612c-413a-8271-a4de5a63488d-httpd-run\") pod \"95ba406a-612c-413a-8271-a4de5a63488d\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.491762 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-combined-ca-bundle\") pod \"95ba406a-612c-413a-8271-a4de5a63488d\" (UID: \"95ba406a-612c-413a-8271-a4de5a63488d\") " Oct 08 13:36:35 crc 
kubenswrapper[5065]: I1008 13:36:35.492386 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95ba406a-612c-413a-8271-a4de5a63488d-logs" (OuterVolumeSpecName: "logs") pod "95ba406a-612c-413a-8271-a4de5a63488d" (UID: "95ba406a-612c-413a-8271-a4de5a63488d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.492423 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95ba406a-612c-413a-8271-a4de5a63488d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "95ba406a-612c-413a-8271-a4de5a63488d" (UID: "95ba406a-612c-413a-8271-a4de5a63488d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.497686 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95ba406a-612c-413a-8271-a4de5a63488d-kube-api-access-qcgkt" (OuterVolumeSpecName: "kube-api-access-qcgkt") pod "95ba406a-612c-413a-8271-a4de5a63488d" (UID: "95ba406a-612c-413a-8271-a4de5a63488d"). InnerVolumeSpecName "kube-api-access-qcgkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.498795 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-scripts" (OuterVolumeSpecName: "scripts") pod "95ba406a-612c-413a-8271-a4de5a63488d" (UID: "95ba406a-612c-413a-8271-a4de5a63488d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.499327 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "95ba406a-612c-413a-8271-a4de5a63488d" (UID: "95ba406a-612c-413a-8271-a4de5a63488d"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.522103 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95ba406a-612c-413a-8271-a4de5a63488d" (UID: "95ba406a-612c-413a-8271-a4de5a63488d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.550118 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-config-data" (OuterVolumeSpecName: "config-data") pod "95ba406a-612c-413a-8271-a4de5a63488d" (UID: "95ba406a-612c-413a-8271-a4de5a63488d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.558586 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "95ba406a-612c-413a-8271-a4de5a63488d" (UID: "95ba406a-612c-413a-8271-a4de5a63488d"). InnerVolumeSpecName "internal-tls-certs". 
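Each TearDown record also names the volume's plugin ("kubernetes.io/empty-dir", "kubernetes.io/projected", "kubernetes.io/secret", "kubernetes.io/local-volume"), and in this excerpt the empty-dir teardowns complete within a millisecond while the secret-backed ones trail by tens of milliseconds. A sketch that groups TearDown successes by pod UID and plugin, under the same one-record-per-line assumption as above (`teardowns_by_pod` is an illustrative name):

import re
from collections import defaultdict

# TearDown lines are logged unescaped, e.g.:
#   UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/<uid>-scripts"
#   (OuterVolumeSpecName: "scripts") pod "<uid>" ... PluginName "kubernetes.io/secret", ...
TEARDOWN = re.compile(
    r'UnmountVolume\.TearDown succeeded for volume "[^"]+" '
    r'\(OuterVolumeSpecName: "([^"]+)"\) pod "([^"]+)"')
PLUGIN = re.compile(r'PluginName "([^"]+)"')

def teardowns_by_pod(lines):
    """Map pod UID -> {plugin: [outer volume names]} for completed teardowns."""
    out = defaultdict(lambda: defaultdict(list))
    for line in lines:
        m = TEARDOWN.search(line)
        if not m:
            continue
        plugin = PLUGIN.search(line)
        out[m.group(2)][plugin.group(1) if plugin else "?"].append(m.group(1))
    return out

For pod 95ba406a-612c-413a-8271-a4de5a63488d this yields two empty-dir volumes (logs, httpd-run), one projected token, one local-volume (glance), and four secrets, matching the eight unmounts listed above.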
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.593979 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.594013 5065 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.594028 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.594038 5065 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95ba406a-612c-413a-8271-a4de5a63488d-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.594049 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ba406a-612c-413a-8271-a4de5a63488d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.594060 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95ba406a-612c-413a-8271-a4de5a63488d-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.594070 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcgkt\" (UniqueName: \"kubernetes.io/projected/95ba406a-612c-413a-8271-a4de5a63488d-kube-api-access-qcgkt\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.594122 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.615195 5065 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.695914 5065 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.702788 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.708847 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.729985 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:36:35 crc kubenswrapper[5065]: E1008 13:36:35.730364 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd5f555-7bfb-40ad-8bc2-465f83d669f5" containerName="dnsmasq-dns" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.730378 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd5f555-7bfb-40ad-8bc2-465f83d669f5" containerName="dnsmasq-dns" Oct 08 13:36:35 crc kubenswrapper[5065]: E1008 
13:36:35.730402 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd5f555-7bfb-40ad-8bc2-465f83d669f5" containerName="init" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.730408 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd5f555-7bfb-40ad-8bc2-465f83d669f5" containerName="init" Oct 08 13:36:35 crc kubenswrapper[5065]: E1008 13:36:35.732679 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a66a90-c9f4-468c-845a-d4877023f501" containerName="init" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.732688 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a66a90-c9f4-468c-845a-d4877023f501" containerName="init" Oct 08 13:36:35 crc kubenswrapper[5065]: E1008 13:36:35.732696 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ba406a-612c-413a-8271-a4de5a63488d" containerName="glance-httpd" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.732712 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ba406a-612c-413a-8271-a4de5a63488d" containerName="glance-httpd" Oct 08 13:36:35 crc kubenswrapper[5065]: E1008 13:36:35.732719 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ba406a-612c-413a-8271-a4de5a63488d" containerName="glance-log" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.732725 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ba406a-612c-413a-8271-a4de5a63488d" containerName="glance-log" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.732972 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="95ba406a-612c-413a-8271-a4de5a63488d" containerName="glance-log" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.732984 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="85a66a90-c9f4-468c-845a-d4877023f501" containerName="init" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.732999 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="95ba406a-612c-413a-8271-a4de5a63488d" containerName="glance-httpd" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.733013 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd5f555-7bfb-40ad-8bc2-465f83d669f5" containerName="dnsmasq-dns" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.734033 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.736354 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.736934 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.757099 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.797000 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.797052 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e25f362-2a7f-48cb-b91f-18938713da5b-logs\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.797196 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.797258 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.797288 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l5kj\" (UniqueName: \"kubernetes.io/projected/7e25f362-2a7f-48cb-b91f-18938713da5b-kube-api-access-5l5kj\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.797320 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.797373 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.797403 5065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e25f362-2a7f-48cb-b91f-18938713da5b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.899439 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.899549 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e25f362-2a7f-48cb-b91f-18938713da5b-logs\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.899627 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.899740 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.899771 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l5kj\" (UniqueName: \"kubernetes.io/projected/7e25f362-2a7f-48cb-b91f-18938713da5b-kube-api-access-5l5kj\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.899797 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.899834 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.899859 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e25f362-2a7f-48cb-b91f-18938713da5b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.900322 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7e25f362-2a7f-48cb-b91f-18938713da5b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.900605 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e25f362-2a7f-48cb-b91f-18938713da5b-logs\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.902716 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.905568 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.906479 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.909784 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.912876 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.922184 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l5kj\" (UniqueName: \"kubernetes.io/projected/7e25f362-2a7f-48cb-b91f-18938713da5b-kube-api-access-5l5kj\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:35 crc kubenswrapper[5065]: I1008 13:36:35.934659 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:36:36 crc kubenswrapper[5065]: I1008 13:36:36.061465 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:36 crc kubenswrapper[5065]: I1008 13:36:36.885767 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95ba406a-612c-413a-8271-a4de5a63488d" path="/var/lib/kubelet/pods/95ba406a-612c-413a-8271-a4de5a63488d/volumes" Oct 08 13:36:37 crc kubenswrapper[5065]: I1008 13:36:37.389789 5065 generic.go:334] "Generic (PLEG): container finished" podID="0b342099-e797-4386-9726-06e15cfb589c" containerID="8f9a531617fee5e3dd656a56664dbd0591c6269efbd6664e714313e687b6be8f" exitCode=0 Oct 08 13:36:37 crc kubenswrapper[5065]: I1008 13:36:37.389831 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7v2k4" event={"ID":"0b342099-e797-4386-9726-06e15cfb589c","Type":"ContainerDied","Data":"8f9a531617fee5e3dd656a56664dbd0591c6269efbd6664e714313e687b6be8f"} Oct 08 13:36:38 crc kubenswrapper[5065]: I1008 13:36:38.156604 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:36:38 crc kubenswrapper[5065]: I1008 13:36:38.215702 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-qmlbs"] Oct 08 13:36:38 crc kubenswrapper[5065]: I1008 13:36:38.215995 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" podUID="739c4d4d-fa0c-49c1-b435-606f3eb19f49" containerName="dnsmasq-dns" containerID="cri-o://6995322149d4c44f0e2438c7519c2d4be6214bb33c70ab5eb2f51b08a0de9638" gracePeriod=10 Oct 08 13:36:38 crc kubenswrapper[5065]: I1008 13:36:38.412875 5065 generic.go:334] "Generic (PLEG): container finished" podID="739c4d4d-fa0c-49c1-b435-606f3eb19f49" containerID="6995322149d4c44f0e2438c7519c2d4be6214bb33c70ab5eb2f51b08a0de9638" exitCode=0 Oct 08 13:36:38 crc kubenswrapper[5065]: I1008 13:36:38.412997 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" event={"ID":"739c4d4d-fa0c-49c1-b435-606f3eb19f49","Type":"ContainerDied","Data":"6995322149d4c44f0e2438c7519c2d4be6214bb33c70ab5eb2f51b08a0de9638"} Oct 08 13:36:39 crc kubenswrapper[5065]: I1008 13:36:39.464600 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" podUID="739c4d4d-fa0c-49c1-b435-606f3eb19f49" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Oct 08 13:36:39 crc kubenswrapper[5065]: I1008 13:36:39.910171 5065 scope.go:117] "RemoveContainer" containerID="52b4a1e571050250fcccdd0e86addffa00e382cff803c9e59ccf72b27f869227" Oct 08 13:36:39 crc kubenswrapper[5065]: I1008 13:36:39.983920 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.073132 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-combined-ca-bundle\") pod \"f60d1424-d26c-4ae2-b2c7-418691101303\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.073250 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-scripts\") pod \"f60d1424-d26c-4ae2-b2c7-418691101303\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.073294 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f60d1424-d26c-4ae2-b2c7-418691101303-httpd-run\") pod \"f60d1424-d26c-4ae2-b2c7-418691101303\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.073340 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"f60d1424-d26c-4ae2-b2c7-418691101303\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.073376 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zldqq\" (UniqueName: \"kubernetes.io/projected/f60d1424-d26c-4ae2-b2c7-418691101303-kube-api-access-zldqq\") pod \"f60d1424-d26c-4ae2-b2c7-418691101303\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.073409 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60d1424-d26c-4ae2-b2c7-418691101303-logs\") pod \"f60d1424-d26c-4ae2-b2c7-418691101303\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.073467 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-config-data\") pod \"f60d1424-d26c-4ae2-b2c7-418691101303\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.073494 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-public-tls-certs\") pod \"f60d1424-d26c-4ae2-b2c7-418691101303\" (UID: \"f60d1424-d26c-4ae2-b2c7-418691101303\") " Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.075574 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f60d1424-d26c-4ae2-b2c7-418691101303-logs" (OuterVolumeSpecName: "logs") pod "f60d1424-d26c-4ae2-b2c7-418691101303" (UID: "f60d1424-d26c-4ae2-b2c7-418691101303"). InnerVolumeSpecName "logs". 
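When the kubelet stops these containers it logs "Killing container with a grace period" (gracePeriod=30 for the glance containers, 10 for dnsmasq), and the PLEG later reports "container finished" with an exit code: 143 is 128+15, meaning the process exited on SIGTERM within the grace period rather than being SIGKILLed, while 0 is a clean exit. A sketch that joins the two record types by container ID (illustrative names, one journal record per line assumed):

import re

KILL = re.compile(r'"Killing container with a grace period".*?'
                  r'containerName="([^"]+)" containerID="cri-o://([0-9a-f]+)" '
                  r'gracePeriod=(\d+)')
DONE = re.compile(r'"Generic \(PLEG\): container finished".*?'
                  r'containerID="([0-9a-f]+)" exitCode=(-?\d+)')

def kill_outcomes(lines):
    """Yield (container name, grace period, exit code) for each killed container."""
    killed = {}  # bare container ID -> (name, grace period)
    for line in lines:
        m = KILL.search(line)
        if m:
            killed[m.group(2)] = (m.group(1), int(m.group(3)))
            continue
        m = DONE.search(line)
        if m and m.group(1) in killed:
            name, grace = killed.pop(m.group(1))
            yield name, grace, int(m.group(2))

On this excerpt it pairs the four glance containers (grace 30, exit 143) with the dnsmasq-dns container of the 564965cbfc pod (grace 10, exit 0).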
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.075667 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f60d1424-d26c-4ae2-b2c7-418691101303-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f60d1424-d26c-4ae2-b2c7-418691101303" (UID: "f60d1424-d26c-4ae2-b2c7-418691101303"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.080022 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "f60d1424-d26c-4ae2-b2c7-418691101303" (UID: "f60d1424-d26c-4ae2-b2c7-418691101303"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.080370 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-scripts" (OuterVolumeSpecName: "scripts") pod "f60d1424-d26c-4ae2-b2c7-418691101303" (UID: "f60d1424-d26c-4ae2-b2c7-418691101303"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.084161 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60d1424-d26c-4ae2-b2c7-418691101303-kube-api-access-zldqq" (OuterVolumeSpecName: "kube-api-access-zldqq") pod "f60d1424-d26c-4ae2-b2c7-418691101303" (UID: "f60d1424-d26c-4ae2-b2c7-418691101303"). InnerVolumeSpecName "kube-api-access-zldqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.101386 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f60d1424-d26c-4ae2-b2c7-418691101303" (UID: "f60d1424-d26c-4ae2-b2c7-418691101303"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.121929 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-config-data" (OuterVolumeSpecName: "config-data") pod "f60d1424-d26c-4ae2-b2c7-418691101303" (UID: "f60d1424-d26c-4ae2-b2c7-418691101303"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.124346 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f60d1424-d26c-4ae2-b2c7-418691101303" (UID: "f60d1424-d26c-4ae2-b2c7-418691101303"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.175170 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.175204 5065 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f60d1424-d26c-4ae2-b2c7-418691101303-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.175236 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.175246 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zldqq\" (UniqueName: \"kubernetes.io/projected/f60d1424-d26c-4ae2-b2c7-418691101303-kube-api-access-zldqq\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.175258 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60d1424-d26c-4ae2-b2c7-418691101303-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.175266 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.175274 5065 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.175282 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60d1424-d26c-4ae2-b2c7-418691101303-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.194008 5065 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.276991 5065 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.431211 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.431205 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f60d1424-d26c-4ae2-b2c7-418691101303","Type":"ContainerDied","Data":"3824f96e619151b82d25297eaa1f57efa29227593371e361884142f2b45a464f"} Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.464838 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.473440 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.486713 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:36:40 crc kubenswrapper[5065]: E1008 13:36:40.487172 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60d1424-d26c-4ae2-b2c7-418691101303" containerName="glance-httpd" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.487196 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60d1424-d26c-4ae2-b2c7-418691101303" containerName="glance-httpd" Oct 08 13:36:40 crc kubenswrapper[5065]: E1008 13:36:40.487210 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60d1424-d26c-4ae2-b2c7-418691101303" containerName="glance-log" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.487216 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60d1424-d26c-4ae2-b2c7-418691101303" containerName="glance-log" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.487394 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60d1424-d26c-4ae2-b2c7-418691101303" containerName="glance-log" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.487425 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60d1424-d26c-4ae2-b2c7-418691101303" containerName="glance-httpd" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.488479 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.495260 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.495289 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.496239 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.583061 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/372e941a-3d7d-49ee-84c1-9d3d159d603f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.583152 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372e941a-3d7d-49ee-84c1-9d3d159d603f-logs\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.583185 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-scripts\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.583222 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.583246 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-config-data\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.583326 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.583397 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.583450 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xd58l\" (UniqueName: \"kubernetes.io/projected/372e941a-3d7d-49ee-84c1-9d3d159d603f-kube-api-access-xd58l\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.685029 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.685200 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.685290 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd58l\" (UniqueName: \"kubernetes.io/projected/372e941a-3d7d-49ee-84c1-9d3d159d603f-kube-api-access-xd58l\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.685403 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/372e941a-3d7d-49ee-84c1-9d3d159d603f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.685650 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.686286 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/372e941a-3d7d-49ee-84c1-9d3d159d603f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.686846 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372e941a-3d7d-49ee-84c1-9d3d159d603f-logs\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.686951 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372e941a-3d7d-49ee-84c1-9d3d159d603f-logs\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.687024 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-scripts\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.687084 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.687146 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-config-data\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.692007 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-scripts\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.692219 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.692540 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.695270 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-config-data\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.709391 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd58l\" (UniqueName: \"kubernetes.io/projected/372e941a-3d7d-49ee-84c1-9d3d159d603f-kube-api-access-xd58l\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.720745 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.817733 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 13:36:40 crc kubenswrapper[5065]: I1008 13:36:40.887077 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60d1424-d26c-4ae2-b2c7-418691101303" path="/var/lib/kubelet/pods/f60d1424-d26c-4ae2-b2c7-418691101303/volumes" Oct 08 13:36:44 crc kubenswrapper[5065]: I1008 13:36:44.464243 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" podUID="739c4d4d-fa0c-49c1-b435-606f3eb19f49" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Oct 08 13:36:49 crc kubenswrapper[5065]: I1008 13:36:49.465077 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" podUID="739c4d4d-fa0c-49c1-b435-606f3eb19f49" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Oct 08 13:36:49 crc kubenswrapper[5065]: I1008 13:36:49.465693 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" Oct 08 13:36:49 crc kubenswrapper[5065]: I1008 13:36:49.817270 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:49 crc kubenswrapper[5065]: I1008 13:36:49.986474 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-fernet-keys\") pod \"0b342099-e797-4386-9726-06e15cfb589c\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " Oct 08 13:36:49 crc kubenswrapper[5065]: I1008 13:36:49.986539 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-config-data\") pod \"0b342099-e797-4386-9726-06e15cfb589c\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " Oct 08 13:36:49 crc kubenswrapper[5065]: I1008 13:36:49.986620 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-scripts\") pod \"0b342099-e797-4386-9726-06e15cfb589c\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " Oct 08 13:36:49 crc kubenswrapper[5065]: I1008 13:36:49.986689 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-combined-ca-bundle\") pod \"0b342099-e797-4386-9726-06e15cfb589c\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " Oct 08 13:36:49 crc kubenswrapper[5065]: I1008 13:36:49.986721 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-credential-keys\") pod \"0b342099-e797-4386-9726-06e15cfb589c\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " Oct 08 13:36:49 crc kubenswrapper[5065]: I1008 13:36:49.986820 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rz8t\" (UniqueName: \"kubernetes.io/projected/0b342099-e797-4386-9726-06e15cfb589c-kube-api-access-6rz8t\") pod \"0b342099-e797-4386-9726-06e15cfb589c\" (UID: \"0b342099-e797-4386-9726-06e15cfb589c\") " Oct 08 13:36:49 crc kubenswrapper[5065]: I1008 13:36:49.996751 5065 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-scripts" (OuterVolumeSpecName: "scripts") pod "0b342099-e797-4386-9726-06e15cfb589c" (UID: "0b342099-e797-4386-9726-06e15cfb589c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:49 crc kubenswrapper[5065]: I1008 13:36:49.997026 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0b342099-e797-4386-9726-06e15cfb589c" (UID: "0b342099-e797-4386-9726-06e15cfb589c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:49 crc kubenswrapper[5065]: I1008 13:36:49.997105 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0b342099-e797-4386-9726-06e15cfb589c" (UID: "0b342099-e797-4386-9726-06e15cfb589c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.014545 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b342099-e797-4386-9726-06e15cfb589c-kube-api-access-6rz8t" (OuterVolumeSpecName: "kube-api-access-6rz8t") pod "0b342099-e797-4386-9726-06e15cfb589c" (UID: "0b342099-e797-4386-9726-06e15cfb589c"). InnerVolumeSpecName "kube-api-access-6rz8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.016214 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-config-data" (OuterVolumeSpecName: "config-data") pod "0b342099-e797-4386-9726-06e15cfb589c" (UID: "0b342099-e797-4386-9726-06e15cfb589c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.030035 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b342099-e797-4386-9726-06e15cfb589c" (UID: "0b342099-e797-4386-9726-06e15cfb589c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.089432 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.089472 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.089481 5065 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.089491 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rz8t\" (UniqueName: \"kubernetes.io/projected/0b342099-e797-4386-9726-06e15cfb589c-kube-api-access-6rz8t\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.089503 5065 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.089511 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b342099-e797-4386-9726-06e15cfb589c-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.529026 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7v2k4" event={"ID":"0b342099-e797-4386-9726-06e15cfb589c","Type":"ContainerDied","Data":"3950fc88600878c7ce01c00b14045b44eb1ff97659411df7323c5cc0768183e6"} Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.529305 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3950fc88600878c7ce01c00b14045b44eb1ff97659411df7323c5cc0768183e6" Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.529089 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7v2k4" Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.985276 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7v2k4"] Oct 08 13:36:50 crc kubenswrapper[5065]: I1008 13:36:50.998283 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7v2k4"] Oct 08 13:36:51 crc kubenswrapper[5065]: E1008 13:36:51.043779 5065 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:cbe345acb37e57986ecf6685d28c72d0e639bdb493a18e9d3ba947d6c3a16384" Oct 08 13:36:51 crc kubenswrapper[5065]: E1008 13:36:51.044034 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:cbe345acb37e57986ecf6685d28c72d0e639bdb493a18e9d3ba947d6c3a16384,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb72f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-d7qm9_openstack(70fd4e43-69f4-482f-a374-2b8074e6a1d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 13:36:51 crc kubenswrapper[5065]: E1008 13:36:51.045250 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-d7qm9" podUID="70fd4e43-69f4-482f-a374-2b8074e6a1d7" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.060923 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fnmr5"] Oct 08 13:36:51 crc kubenswrapper[5065]: E1008 13:36:51.062498 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b342099-e797-4386-9726-06e15cfb589c" containerName="keystone-bootstrap" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.062529 5065 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0b342099-e797-4386-9726-06e15cfb589c" containerName="keystone-bootstrap" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.064191 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b342099-e797-4386-9726-06e15cfb589c" containerName="keystone-bootstrap" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.065768 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.071205 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.072151 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.072242 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j9fw4" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.072611 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.081182 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fnmr5"] Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.225426 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-combined-ca-bundle\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.225682 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-credential-keys\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.225759 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-fernet-keys\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.225830 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwlmt\" (UniqueName: \"kubernetes.io/projected/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-kube-api-access-qwlmt\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.225918 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-config-data\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.226012 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-scripts\") 
pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.328554 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-combined-ca-bundle\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.328657 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-credential-keys\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.328715 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-fernet-keys\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.328746 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwlmt\" (UniqueName: \"kubernetes.io/projected/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-kube-api-access-qwlmt\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.328815 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-config-data\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.328891 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-scripts\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.334292 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-fernet-keys\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.334973 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-credential-keys\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.343709 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-combined-ca-bundle\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 
13:36:51.344265 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-config-data\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.345064 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-scripts\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.348632 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwlmt\" (UniqueName: \"kubernetes.io/projected/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-kube-api-access-qwlmt\") pod \"keystone-bootstrap-fnmr5\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: I1008 13:36:51.439083 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:51 crc kubenswrapper[5065]: E1008 13:36:51.540995 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:cbe345acb37e57986ecf6685d28c72d0e639bdb493a18e9d3ba947d6c3a16384\\\"\"" pod="openstack/barbican-db-sync-d7qm9" podUID="70fd4e43-69f4-482f-a374-2b8074e6a1d7" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.197304 5065 scope.go:117] "RemoveContainer" containerID="5ffac99aad50949e57390777e162ffcbe4ca98b91e71a8383e699b9bae970b0e" Oct 08 13:36:52 crc kubenswrapper[5065]: E1008 13:36:52.237678 5065 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f" Oct 08 13:36:52 crc kubenswrapper[5065]: E1008 13:36:52.237857 5065 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mtbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-jhs7h_openstack(c5a257f6-4b74-429b-9da0-b76051265822): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 13:36:52 crc kubenswrapper[5065]: E1008 13:36:52.239034 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-jhs7h" podUID="c5a257f6-4b74-429b-9da0-b76051265822" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.412894 5065 scope.go:117] "RemoveContainer" containerID="6090e802ca2b9a791d3e4ba4e54e3c6345b0806ac119b8eeef830a8c6e509d04" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.543094 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.559571 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" event={"ID":"739c4d4d-fa0c-49c1-b435-606f3eb19f49","Type":"ContainerDied","Data":"7fcd42507dc5932ca138c0ecfef6e0118948e5e8fce673eae7467f925d263eb3"} Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.559595 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-564965cbfc-qmlbs" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.571207 5065 scope.go:117] "RemoveContainer" containerID="6995322149d4c44f0e2438c7519c2d4be6214bb33c70ab5eb2f51b08a0de9638" Oct 08 13:36:52 crc kubenswrapper[5065]: E1008 13:36:52.571296 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f\\\"\"" pod="openstack/cinder-db-sync-jhs7h" podUID="c5a257f6-4b74-429b-9da0-b76051265822" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.613496 5065 scope.go:117] "RemoveContainer" containerID="c59de23f50a83f04333b2fc7ff02b7e119d4785656ce4f2dc74280bd36131a45" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.652976 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-ovsdbserver-sb\") pod \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.653020 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7vb8\" (UniqueName: \"kubernetes.io/projected/739c4d4d-fa0c-49c1-b435-606f3eb19f49-kube-api-access-b7vb8\") pod \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.653071 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-ovsdbserver-nb\") pod \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.653161 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-dns-svc\") pod \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.653239 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-config\") pod \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.653271 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-dns-swift-storage-0\") pod \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\" (UID: \"739c4d4d-fa0c-49c1-b435-606f3eb19f49\") " Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.659957 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/739c4d4d-fa0c-49c1-b435-606f3eb19f49-kube-api-access-b7vb8" (OuterVolumeSpecName: "kube-api-access-b7vb8") pod "739c4d4d-fa0c-49c1-b435-606f3eb19f49" (UID: "739c4d4d-fa0c-49c1-b435-606f3eb19f49"). InnerVolumeSpecName "kube-api-access-b7vb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.704181 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "739c4d4d-fa0c-49c1-b435-606f3eb19f49" (UID: "739c4d4d-fa0c-49c1-b435-606f3eb19f49"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.704672 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "739c4d4d-fa0c-49c1-b435-606f3eb19f49" (UID: "739c4d4d-fa0c-49c1-b435-606f3eb19f49"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.714079 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "739c4d4d-fa0c-49c1-b435-606f3eb19f49" (UID: "739c4d4d-fa0c-49c1-b435-606f3eb19f49"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.715101 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-config" (OuterVolumeSpecName: "config") pod "739c4d4d-fa0c-49c1-b435-606f3eb19f49" (UID: "739c4d4d-fa0c-49c1-b435-606f3eb19f49"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.715932 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "739c4d4d-fa0c-49c1-b435-606f3eb19f49" (UID: "739c4d4d-fa0c-49c1-b435-606f3eb19f49"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.756384 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.756434 5065 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.756449 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.756459 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7vb8\" (UniqueName: \"kubernetes.io/projected/739c4d4d-fa0c-49c1-b435-606f3eb19f49-kube-api-access-b7vb8\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.756470 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.756477 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/739c4d4d-fa0c-49c1-b435-606f3eb19f49-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.764471 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fnmr5"] Oct 08 13:36:52 crc kubenswrapper[5065]: W1008 13:36:52.768365 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35c0afa0_44d8_4e3c_9ba7_e09d5b08dbc2.slice/crio-9206f7fc8cab5bb0a23e9d19a65f0b18d73d81318b9d5774d8aebf3968c29395 WatchSource:0}: Error finding container 9206f7fc8cab5bb0a23e9d19a65f0b18d73d81318b9d5774d8aebf3968c29395: Status 404 returned error can't find the container with id 9206f7fc8cab5bb0a23e9d19a65f0b18d73d81318b9d5774d8aebf3968c29395 Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.885867 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b342099-e797-4386-9726-06e15cfb589c" path="/var/lib/kubelet/pods/0b342099-e797-4386-9726-06e15cfb589c/volumes" Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.906435 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-qmlbs"] Oct 08 13:36:52 crc kubenswrapper[5065]: I1008 13:36:52.915116 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-qmlbs"] Oct 08 13:36:53 crc kubenswrapper[5065]: I1008 13:36:53.505847 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:36:53 crc kubenswrapper[5065]: W1008 13:36:53.511719 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e25f362_2a7f_48cb_b91f_18938713da5b.slice/crio-2af75fa2a068a9d70c7f2353568ac194a6979ec123e3fb9de5ff48d904aafda2 WatchSource:0}: Error finding container 2af75fa2a068a9d70c7f2353568ac194a6979ec123e3fb9de5ff48d904aafda2: Status 404 returned error can't 
find the container with id 2af75fa2a068a9d70c7f2353568ac194a6979ec123e3fb9de5ff48d904aafda2 Oct 08 13:36:53 crc kubenswrapper[5065]: I1008 13:36:53.572902 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fnmr5" event={"ID":"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2","Type":"ContainerStarted","Data":"aacb565316241f85f3d1791edc7c96769deec36cdab2eb82624ae8b5b5f4a50f"} Oct 08 13:36:53 crc kubenswrapper[5065]: I1008 13:36:53.573208 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fnmr5" event={"ID":"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2","Type":"ContainerStarted","Data":"9206f7fc8cab5bb0a23e9d19a65f0b18d73d81318b9d5774d8aebf3968c29395"} Oct 08 13:36:53 crc kubenswrapper[5065]: I1008 13:36:53.587379 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8849d3af-fdf6-4ec0-a66f-58da38c924f5","Type":"ContainerStarted","Data":"276a073bbe59b185d58c26f2d4fcf6d9244997e7c226c873d81a0728a73a9e5d"} Oct 08 13:36:53 crc kubenswrapper[5065]: I1008 13:36:53.595528 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e25f362-2a7f-48cb-b91f-18938713da5b","Type":"ContainerStarted","Data":"2af75fa2a068a9d70c7f2353568ac194a6979ec123e3fb9de5ff48d904aafda2"} Oct 08 13:36:53 crc kubenswrapper[5065]: I1008 13:36:53.599304 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hmh6z" event={"ID":"12f7954b-c868-4e42-80d4-b7285d4f20e5","Type":"ContainerStarted","Data":"a0226c22195545becdc2de24894b17738f864dbedcd9a6e394274a3f8d7be4b3"} Oct 08 13:36:53 crc kubenswrapper[5065]: I1008 13:36:53.601607 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fnmr5" podStartSLOduration=2.601592548 podStartE2EDuration="2.601592548s" podCreationTimestamp="2025-10-08 13:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:36:53.592258911 +0000 UTC m=+1115.369640708" watchObservedRunningTime="2025-10-08 13:36:53.601592548 +0000 UTC m=+1115.378974305" Oct 08 13:36:53 crc kubenswrapper[5065]: I1008 13:36:53.631605 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hmh6z" podStartSLOduration=3.068069591 podStartE2EDuration="26.631555014s" podCreationTimestamp="2025-10-08 13:36:27 +0000 UTC" firstStartedPulling="2025-10-08 13:36:28.620529523 +0000 UTC m=+1090.397911280" lastFinishedPulling="2025-10-08 13:36:52.184014946 +0000 UTC m=+1113.961396703" observedRunningTime="2025-10-08 13:36:53.616700414 +0000 UTC m=+1115.394082191" watchObservedRunningTime="2025-10-08 13:36:53.631555014 +0000 UTC m=+1115.408936771" Oct 08 13:36:54 crc kubenswrapper[5065]: I1008 13:36:54.035467 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:36:54 crc kubenswrapper[5065]: W1008 13:36:54.085530 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod372e941a_3d7d_49ee_84c1_9d3d159d603f.slice/crio-98aa51f6c3b6c9e45f7a9b1963524bc8d86236beeeb2e40b40c92ea6500b2e6c WatchSource:0}: Error finding container 98aa51f6c3b6c9e45f7a9b1963524bc8d86236beeeb2e40b40c92ea6500b2e6c: Status 404 returned error can't find the container with id 98aa51f6c3b6c9e45f7a9b1963524bc8d86236beeeb2e40b40c92ea6500b2e6c Oct 08 13:36:54 crc 
kubenswrapper[5065]: I1008 13:36:54.375259 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:36:54 crc kubenswrapper[5065]: I1008 13:36:54.375327 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:36:54 crc kubenswrapper[5065]: I1008 13:36:54.375425 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:36:54 crc kubenswrapper[5065]: I1008 13:36:54.376156 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31f1099402b40e4377d6225bd79cd57be8759f2926970d8fbf7335327beefc81"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 13:36:54 crc kubenswrapper[5065]: I1008 13:36:54.376228 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://31f1099402b40e4377d6225bd79cd57be8759f2926970d8fbf7335327beefc81" gracePeriod=600 Oct 08 13:36:54 crc kubenswrapper[5065]: I1008 13:36:54.609095 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8849d3af-fdf6-4ec0-a66f-58da38c924f5","Type":"ContainerStarted","Data":"734a2592bd428d54543ea3d6accad5b71e8559c67017752447ff898572783b6c"} Oct 08 13:36:54 crc kubenswrapper[5065]: I1008 13:36:54.611290 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"372e941a-3d7d-49ee-84c1-9d3d159d603f","Type":"ContainerStarted","Data":"98aa51f6c3b6c9e45f7a9b1963524bc8d86236beeeb2e40b40c92ea6500b2e6c"} Oct 08 13:36:54 crc kubenswrapper[5065]: I1008 13:36:54.615052 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="31f1099402b40e4377d6225bd79cd57be8759f2926970d8fbf7335327beefc81" exitCode=0 Oct 08 13:36:54 crc kubenswrapper[5065]: I1008 13:36:54.615092 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"31f1099402b40e4377d6225bd79cd57be8759f2926970d8fbf7335327beefc81"} Oct 08 13:36:54 crc kubenswrapper[5065]: I1008 13:36:54.615162 5065 scope.go:117] "RemoveContainer" containerID="f1a1c08caf1f5c5ebf44b5caec0b83171c54c6a08c4b6c83a6707f77736bc763" Oct 08 13:36:54 crc kubenswrapper[5065]: I1008 13:36:54.623349 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e25f362-2a7f-48cb-b91f-18938713da5b","Type":"ContainerStarted","Data":"23444c74e68ca2731c3fdae6d6012d70c465c50587151ed8258d49e05535c7db"} Oct 08 13:36:54 crc kubenswrapper[5065]: I1008 13:36:54.885475 5065 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="739c4d4d-fa0c-49c1-b435-606f3eb19f49" path="/var/lib/kubelet/pods/739c4d4d-fa0c-49c1-b435-606f3eb19f49/volumes" Oct 08 13:36:55 crc kubenswrapper[5065]: I1008 13:36:55.636529 5065 generic.go:334] "Generic (PLEG): container finished" podID="12f7954b-c868-4e42-80d4-b7285d4f20e5" containerID="a0226c22195545becdc2de24894b17738f864dbedcd9a6e394274a3f8d7be4b3" exitCode=0 Oct 08 13:36:55 crc kubenswrapper[5065]: I1008 13:36:55.636635 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hmh6z" event={"ID":"12f7954b-c868-4e42-80d4-b7285d4f20e5","Type":"ContainerDied","Data":"a0226c22195545becdc2de24894b17738f864dbedcd9a6e394274a3f8d7be4b3"} Oct 08 13:36:55 crc kubenswrapper[5065]: I1008 13:36:55.638938 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"372e941a-3d7d-49ee-84c1-9d3d159d603f","Type":"ContainerStarted","Data":"0adb44177ccce437d423f3ea76c3e1102bc6b162fc2fc3fd423fd84cf51206b4"} Oct 08 13:36:55 crc kubenswrapper[5065]: I1008 13:36:55.638973 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"372e941a-3d7d-49ee-84c1-9d3d159d603f","Type":"ContainerStarted","Data":"0d1e6f22995ab2c22af8be1cb1d3697d181e18f85709f9cc2b049586cc48bc20"} Oct 08 13:36:55 crc kubenswrapper[5065]: I1008 13:36:55.643797 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"5fd12d0a8c18886d62fe0f77c00a82717c3aaf19bdc8e84b083c3e64ad847f5b"} Oct 08 13:36:55 crc kubenswrapper[5065]: I1008 13:36:55.645895 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e25f362-2a7f-48cb-b91f-18938713da5b","Type":"ContainerStarted","Data":"a07a2095a71689cf1898efb6bd35c1151e37d73530fa2963acfbb21af6689e23"} Oct 08 13:36:55 crc kubenswrapper[5065]: I1008 13:36:55.689183 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=20.689161476 podStartE2EDuration="20.689161476s" podCreationTimestamp="2025-10-08 13:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:36:55.675672835 +0000 UTC m=+1117.453054592" watchObservedRunningTime="2025-10-08 13:36:55.689161476 +0000 UTC m=+1117.466543253" Oct 08 13:36:55 crc kubenswrapper[5065]: I1008 13:36:55.719067 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=15.71904721 podStartE2EDuration="15.71904721s" podCreationTimestamp="2025-10-08 13:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:36:55.716037567 +0000 UTC m=+1117.493419324" watchObservedRunningTime="2025-10-08 13:36:55.71904721 +0000 UTC m=+1117.496428967" Oct 08 13:36:56 crc kubenswrapper[5065]: I1008 13:36:56.062652 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:56 crc kubenswrapper[5065]: I1008 13:36:56.062703 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:56 crc 
kubenswrapper[5065]: I1008 13:36:56.092503 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:56 crc kubenswrapper[5065]: I1008 13:36:56.125172 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:56 crc kubenswrapper[5065]: I1008 13:36:56.654734 5065 generic.go:334] "Generic (PLEG): container finished" podID="35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2" containerID="aacb565316241f85f3d1791edc7c96769deec36cdab2eb82624ae8b5b5f4a50f" exitCode=0 Oct 08 13:36:56 crc kubenswrapper[5065]: I1008 13:36:56.654776 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fnmr5" event={"ID":"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2","Type":"ContainerDied","Data":"aacb565316241f85f3d1791edc7c96769deec36cdab2eb82624ae8b5b5f4a50f"} Oct 08 13:36:56 crc kubenswrapper[5065]: I1008 13:36:56.656094 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:56 crc kubenswrapper[5065]: I1008 13:36:56.656119 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.012564 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.136753 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12f7954b-c868-4e42-80d4-b7285d4f20e5-logs\") pod \"12f7954b-c868-4e42-80d4-b7285d4f20e5\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.137183 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12f7954b-c868-4e42-80d4-b7285d4f20e5-logs" (OuterVolumeSpecName: "logs") pod "12f7954b-c868-4e42-80d4-b7285d4f20e5" (UID: "12f7954b-c868-4e42-80d4-b7285d4f20e5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.137196 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxhvg\" (UniqueName: \"kubernetes.io/projected/12f7954b-c868-4e42-80d4-b7285d4f20e5-kube-api-access-rxhvg\") pod \"12f7954b-c868-4e42-80d4-b7285d4f20e5\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.137311 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-combined-ca-bundle\") pod \"12f7954b-c868-4e42-80d4-b7285d4f20e5\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.137495 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-scripts\") pod \"12f7954b-c868-4e42-80d4-b7285d4f20e5\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.137535 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-config-data\") pod \"12f7954b-c868-4e42-80d4-b7285d4f20e5\" (UID: \"12f7954b-c868-4e42-80d4-b7285d4f20e5\") " Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.138834 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12f7954b-c868-4e42-80d4-b7285d4f20e5-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.143561 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-scripts" (OuterVolumeSpecName: "scripts") pod "12f7954b-c868-4e42-80d4-b7285d4f20e5" (UID: "12f7954b-c868-4e42-80d4-b7285d4f20e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.163793 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12f7954b-c868-4e42-80d4-b7285d4f20e5-kube-api-access-rxhvg" (OuterVolumeSpecName: "kube-api-access-rxhvg") pod "12f7954b-c868-4e42-80d4-b7285d4f20e5" (UID: "12f7954b-c868-4e42-80d4-b7285d4f20e5"). InnerVolumeSpecName "kube-api-access-rxhvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.164339 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12f7954b-c868-4e42-80d4-b7285d4f20e5" (UID: "12f7954b-c868-4e42-80d4-b7285d4f20e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.166201 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-config-data" (OuterVolumeSpecName: "config-data") pod "12f7954b-c868-4e42-80d4-b7285d4f20e5" (UID: "12f7954b-c868-4e42-80d4-b7285d4f20e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.240308 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxhvg\" (UniqueName: \"kubernetes.io/projected/12f7954b-c868-4e42-80d4-b7285d4f20e5-kube-api-access-rxhvg\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.240341 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.240353 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.240361 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f7954b-c868-4e42-80d4-b7285d4f20e5-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.671144 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hmh6z" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.671213 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hmh6z" event={"ID":"12f7954b-c868-4e42-80d4-b7285d4f20e5","Type":"ContainerDied","Data":"163d5dd919424aad86b98de6d9144bc08e468398b1d06138bbe8f569f693c547"} Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.671343 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="163d5dd919424aad86b98de6d9144bc08e468398b1d06138bbe8f569f693c547" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.741360 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-699db6b76b-fd9ls"] Oct 08 13:36:57 crc kubenswrapper[5065]: E1008 13:36:57.742189 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f7954b-c868-4e42-80d4-b7285d4f20e5" containerName="placement-db-sync" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.742210 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f7954b-c868-4e42-80d4-b7285d4f20e5" containerName="placement-db-sync" Oct 08 13:36:57 crc kubenswrapper[5065]: E1008 13:36:57.742225 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739c4d4d-fa0c-49c1-b435-606f3eb19f49" containerName="init" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.742232 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="739c4d4d-fa0c-49c1-b435-606f3eb19f49" containerName="init" Oct 08 13:36:57 crc kubenswrapper[5065]: E1008 13:36:57.742279 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739c4d4d-fa0c-49c1-b435-606f3eb19f49" containerName="dnsmasq-dns" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.742286 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="739c4d4d-fa0c-49c1-b435-606f3eb19f49" containerName="dnsmasq-dns" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.742551 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="12f7954b-c868-4e42-80d4-b7285d4f20e5" containerName="placement-db-sync" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.742570 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="739c4d4d-fa0c-49c1-b435-606f3eb19f49" 
containerName="dnsmasq-dns" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.744082 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.748068 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.748341 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.748518 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-d7bc8" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.748729 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.755466 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.781915 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-699db6b76b-fd9ls"] Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.850944 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-combined-ca-bundle\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.851023 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c5926be-c223-4cbc-b6e3-a16726aa6c84-logs\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.851050 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-scripts\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.851088 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-public-tls-certs\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.851163 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-internal-tls-certs\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.851287 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-config-data\") pod \"placement-699db6b76b-fd9ls\" (UID: 
\"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.851453 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnbv\" (UniqueName: \"kubernetes.io/projected/8c5926be-c223-4cbc-b6e3-a16726aa6c84-kube-api-access-mxnbv\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.952737 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxnbv\" (UniqueName: \"kubernetes.io/projected/8c5926be-c223-4cbc-b6e3-a16726aa6c84-kube-api-access-mxnbv\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.952807 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-combined-ca-bundle\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.952885 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c5926be-c223-4cbc-b6e3-a16726aa6c84-logs\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.952919 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-scripts\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.952978 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-public-tls-certs\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.953015 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-internal-tls-certs\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.953085 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-config-data\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.956158 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c5926be-c223-4cbc-b6e3-a16726aa6c84-logs\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 
crc kubenswrapper[5065]: I1008 13:36:57.958861 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-internal-tls-certs\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.959610 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-public-tls-certs\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.960026 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-combined-ca-bundle\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.963360 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-config-data\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.966774 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-scripts\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:57 crc kubenswrapper[5065]: I1008 13:36:57.979315 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxnbv\" (UniqueName: \"kubernetes.io/projected/8c5926be-c223-4cbc-b6e3-a16726aa6c84-kube-api-access-mxnbv\") pod \"placement-699db6b76b-fd9ls\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.079056 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.598238 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.676120 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-scripts\") pod \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.676168 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-config-data\") pod \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.676250 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-credential-keys\") pod \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.676273 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwlmt\" (UniqueName: \"kubernetes.io/projected/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-kube-api-access-qwlmt\") pod \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.676340 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-fernet-keys\") pod \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.676381 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-combined-ca-bundle\") pod \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\" (UID: \"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2\") " Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.680550 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2" (UID: "35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.680585 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2" (UID: "35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.683621 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-scripts" (OuterVolumeSpecName: "scripts") pod "35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2" (UID: "35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.710282 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-kube-api-access-qwlmt" (OuterVolumeSpecName: "kube-api-access-qwlmt") pod "35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2" (UID: "35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2"). InnerVolumeSpecName "kube-api-access-qwlmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.716457 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fnmr5" event={"ID":"35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2","Type":"ContainerDied","Data":"9206f7fc8cab5bb0a23e9d19a65f0b18d73d81318b9d5774d8aebf3968c29395"} Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.716497 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9206f7fc8cab5bb0a23e9d19a65f0b18d73d81318b9d5774d8aebf3968c29395" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.716578 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fnmr5" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.728346 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-config-data" (OuterVolumeSpecName: "config-data") pod "35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2" (UID: "35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.736806 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2" (UID: "35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.779993 5065 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.780241 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwlmt\" (UniqueName: \"kubernetes.io/projected/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-kube-api-access-qwlmt\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.780311 5065 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.780367 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.780546 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.780613 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.805746 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-77965b6945-w5rpz"] Oct 08 13:36:58 crc kubenswrapper[5065]: E1008 13:36:58.806211 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2" containerName="keystone-bootstrap" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.806238 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2" containerName="keystone-bootstrap" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.806463 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2" containerName="keystone-bootstrap" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.807123 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.812274 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.815936 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.823060 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-77965b6945-w5rpz"] Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.886182 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-scripts\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.886225 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-combined-ca-bundle\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.886277 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-public-tls-certs\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.886319 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv274\" (UniqueName: \"kubernetes.io/projected/0643aa92-2649-4c41-b16e-9a05aac93f35-kube-api-access-bv274\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.886389 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-fernet-keys\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.886463 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-credential-keys\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.886486 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-config-data\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.886508 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-internal-tls-certs\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.990324 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-combined-ca-bundle\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.991296 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-public-tls-certs\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.991384 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv274\" (UniqueName: \"kubernetes.io/projected/0643aa92-2649-4c41-b16e-9a05aac93f35-kube-api-access-bv274\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.991459 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-fernet-keys\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.991611 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-credential-keys\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.991645 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-config-data\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.991675 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-internal-tls-certs\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.991705 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-scripts\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.994747 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-scripts\") pod \"keystone-77965b6945-w5rpz\" (UID: 
\"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.994887 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-combined-ca-bundle\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.996595 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-fernet-keys\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.998905 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-public-tls-certs\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:58 crc kubenswrapper[5065]: I1008 13:36:58.999908 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-internal-tls-certs\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:59 crc kubenswrapper[5065]: I1008 13:36:59.001186 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-config-data\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:59 crc kubenswrapper[5065]: I1008 13:36:59.010027 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-credential-keys\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:59 crc kubenswrapper[5065]: I1008 13:36:59.013609 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv274\" (UniqueName: \"kubernetes.io/projected/0643aa92-2649-4c41-b16e-9a05aac93f35-kube-api-access-bv274\") pod \"keystone-77965b6945-w5rpz\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:36:59 crc kubenswrapper[5065]: I1008 13:36:59.132041 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:37:00 crc kubenswrapper[5065]: I1008 13:37:00.637231 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-699db6b76b-fd9ls"] Oct 08 13:37:00 crc kubenswrapper[5065]: I1008 13:37:00.715838 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-77965b6945-w5rpz"] Oct 08 13:37:00 crc kubenswrapper[5065]: W1008 13:37:00.718628 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0643aa92_2649_4c41_b16e_9a05aac93f35.slice/crio-ad2dd9b9c6b73b0c678557c3cf3bffeb5056772e3cb8dbed1b75ac55b8668549 WatchSource:0}: Error finding container ad2dd9b9c6b73b0c678557c3cf3bffeb5056772e3cb8dbed1b75ac55b8668549: Status 404 returned error can't find the container with id ad2dd9b9c6b73b0c678557c3cf3bffeb5056772e3cb8dbed1b75ac55b8668549 Oct 08 13:37:00 crc kubenswrapper[5065]: I1008 13:37:00.737736 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8849d3af-fdf6-4ec0-a66f-58da38c924f5","Type":"ContainerStarted","Data":"8bf239b4351487f70f99947f43eb3cda72cd424149f32b9c6432544672391cfa"} Oct 08 13:37:00 crc kubenswrapper[5065]: I1008 13:37:00.739221 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699db6b76b-fd9ls" event={"ID":"8c5926be-c223-4cbc-b6e3-a16726aa6c84","Type":"ContainerStarted","Data":"2a0ef0905e6e5abfa7a4de4cafbad33dc6822a7a370ab033e241ac6c417803b7"} Oct 08 13:37:00 crc kubenswrapper[5065]: I1008 13:37:00.740340 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77965b6945-w5rpz" event={"ID":"0643aa92-2649-4c41-b16e-9a05aac93f35","Type":"ContainerStarted","Data":"ad2dd9b9c6b73b0c678557c3cf3bffeb5056772e3cb8dbed1b75ac55b8668549"} Oct 08 13:37:00 crc kubenswrapper[5065]: I1008 13:37:00.818030 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 08 13:37:00 crc kubenswrapper[5065]: I1008 13:37:00.818079 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 08 13:37:00 crc kubenswrapper[5065]: I1008 13:37:00.856119 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 08 13:37:00 crc kubenswrapper[5065]: I1008 13:37:00.872364 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 08 13:37:01 crc kubenswrapper[5065]: I1008 13:37:01.750683 5065 generic.go:334] "Generic (PLEG): container finished" podID="323689c5-d75e-44c2-aa45-3728bea780ff" containerID="16bba1324f0442b219591e075d3110a041e45127e394932b36be8c1e6f2c21f2" exitCode=0 Oct 08 13:37:01 crc kubenswrapper[5065]: I1008 13:37:01.750889 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vb8jx" event={"ID":"323689c5-d75e-44c2-aa45-3728bea780ff","Type":"ContainerDied","Data":"16bba1324f0442b219591e075d3110a041e45127e394932b36be8c1e6f2c21f2"} Oct 08 13:37:01 crc kubenswrapper[5065]: I1008 13:37:01.754673 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699db6b76b-fd9ls" event={"ID":"8c5926be-c223-4cbc-b6e3-a16726aa6c84","Type":"ContainerStarted","Data":"4f5cc840893a37a8358a60214a0e53d57ac801d6d1d7c94a9535b9435aca7b18"} Oct 08 13:37:01 crc kubenswrapper[5065]: I1008 13:37:01.754717 5065 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/placement-699db6b76b-fd9ls" event={"ID":"8c5926be-c223-4cbc-b6e3-a16726aa6c84","Type":"ContainerStarted","Data":"bff50251b4d187b021a54254dfa7d771b946c97dfed06af02871e3a9618634ea"} Oct 08 13:37:01 crc kubenswrapper[5065]: I1008 13:37:01.754774 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:37:01 crc kubenswrapper[5065]: I1008 13:37:01.754865 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:37:01 crc kubenswrapper[5065]: I1008 13:37:01.757151 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77965b6945-w5rpz" event={"ID":"0643aa92-2649-4c41-b16e-9a05aac93f35","Type":"ContainerStarted","Data":"c5ecc0498b80f64e750be4906d2d2f254a379f8816d1ff34f5e046c74b53a267"} Oct 08 13:37:01 crc kubenswrapper[5065]: I1008 13:37:01.757189 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:37:01 crc kubenswrapper[5065]: I1008 13:37:01.757561 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 08 13:37:01 crc kubenswrapper[5065]: I1008 13:37:01.757593 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 08 13:37:01 crc kubenswrapper[5065]: I1008 13:37:01.804713 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-699db6b76b-fd9ls" podStartSLOduration=4.804696069 podStartE2EDuration="4.804696069s" podCreationTimestamp="2025-10-08 13:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:01.792142173 +0000 UTC m=+1123.569523920" watchObservedRunningTime="2025-10-08 13:37:01.804696069 +0000 UTC m=+1123.582077826" Oct 08 13:37:01 crc kubenswrapper[5065]: I1008 13:37:01.809077 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-77965b6945-w5rpz" podStartSLOduration=3.80906701 podStartE2EDuration="3.80906701s" podCreationTimestamp="2025-10-08 13:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:01.807680001 +0000 UTC m=+1123.585061798" watchObservedRunningTime="2025-10-08 13:37:01.80906701 +0000 UTC m=+1123.586448767" Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.105464 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vb8jx" Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.283220 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/323689c5-d75e-44c2-aa45-3728bea780ff-config\") pod \"323689c5-d75e-44c2-aa45-3728bea780ff\" (UID: \"323689c5-d75e-44c2-aa45-3728bea780ff\") " Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.283531 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323689c5-d75e-44c2-aa45-3728bea780ff-combined-ca-bundle\") pod \"323689c5-d75e-44c2-aa45-3728bea780ff\" (UID: \"323689c5-d75e-44c2-aa45-3728bea780ff\") " Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.283757 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49cfd\" (UniqueName: \"kubernetes.io/projected/323689c5-d75e-44c2-aa45-3728bea780ff-kube-api-access-49cfd\") pod \"323689c5-d75e-44c2-aa45-3728bea780ff\" (UID: \"323689c5-d75e-44c2-aa45-3728bea780ff\") " Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.290979 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323689c5-d75e-44c2-aa45-3728bea780ff-kube-api-access-49cfd" (OuterVolumeSpecName: "kube-api-access-49cfd") pod "323689c5-d75e-44c2-aa45-3728bea780ff" (UID: "323689c5-d75e-44c2-aa45-3728bea780ff"). InnerVolumeSpecName "kube-api-access-49cfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.314552 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323689c5-d75e-44c2-aa45-3728bea780ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "323689c5-d75e-44c2-aa45-3728bea780ff" (UID: "323689c5-d75e-44c2-aa45-3728bea780ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.323633 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323689c5-d75e-44c2-aa45-3728bea780ff-config" (OuterVolumeSpecName: "config") pod "323689c5-d75e-44c2-aa45-3728bea780ff" (UID: "323689c5-d75e-44c2-aa45-3728bea780ff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.388943 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/323689c5-d75e-44c2-aa45-3728bea780ff-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.388988 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323689c5-d75e-44c2-aa45-3728bea780ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.389003 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49cfd\" (UniqueName: \"kubernetes.io/projected/323689c5-d75e-44c2-aa45-3728bea780ff-kube-api-access-49cfd\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.686848 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.709485 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.796488 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vb8jx" event={"ID":"323689c5-d75e-44c2-aa45-3728bea780ff","Type":"ContainerDied","Data":"261c0ec977b629677c045bdc2ee5e30127513fae7112384f6c4b0f456d3559ae"} Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.796613 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="261c0ec977b629677c045bdc2ee5e30127513fae7112384f6c4b0f456d3559ae" Oct 08 13:37:03 crc kubenswrapper[5065]: I1008 13:37:03.796572 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vb8jx" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.020865 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b55c5465-lbrmn"] Oct 08 13:37:04 crc kubenswrapper[5065]: E1008 13:37:04.021193 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323689c5-d75e-44c2-aa45-3728bea780ff" containerName="neutron-db-sync" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.021209 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="323689c5-d75e-44c2-aa45-3728bea780ff" containerName="neutron-db-sync" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.021367 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="323689c5-d75e-44c2-aa45-3728bea780ff" containerName="neutron-db-sync" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.023979 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.049237 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b55c5465-lbrmn"] Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.123832 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-56c8fc79b6-9z5rs"] Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.128357 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.131047 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-n7lsb" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.131347 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.131497 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.131610 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.139237 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56c8fc79b6-9z5rs"] Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.205070 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-ovsdbserver-sb\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.205128 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-dns-swift-storage-0\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.205209 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skrhk\" (UniqueName: \"kubernetes.io/projected/e4512702-9289-4284-9325-674842084fc7-kube-api-access-skrhk\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.205463 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-ovsdbserver-nb\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.205520 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-dns-svc\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.205731 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-config\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.307633 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7dh5\" (UniqueName: 
\"kubernetes.io/projected/62f8e978-1ba0-4348-9340-13a1e9083e02-kube-api-access-g7dh5\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.307694 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-ovndb-tls-certs\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.307732 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-ovsdbserver-nb\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.307781 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-config\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.307804 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-httpd-config\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.307834 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-dns-svc\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.307861 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-combined-ca-bundle\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.308089 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-config\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.308182 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-ovsdbserver-sb\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.308208 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-dns-swift-storage-0\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.308287 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skrhk\" (UniqueName: \"kubernetes.io/projected/e4512702-9289-4284-9325-674842084fc7-kube-api-access-skrhk\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.308821 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-dns-svc\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.308849 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-ovsdbserver-nb\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.309143 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-ovsdbserver-sb\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.309194 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-config\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.309620 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-dns-swift-storage-0\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.341595 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skrhk\" (UniqueName: \"kubernetes.io/projected/e4512702-9289-4284-9325-674842084fc7-kube-api-access-skrhk\") pod \"dnsmasq-dns-67b55c5465-lbrmn\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.342002 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.410439 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7dh5\" (UniqueName: \"kubernetes.io/projected/62f8e978-1ba0-4348-9340-13a1e9083e02-kube-api-access-g7dh5\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.410485 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-ovndb-tls-certs\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.410533 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-config\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.410552 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-httpd-config\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.411176 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-combined-ca-bundle\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.414009 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-config\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.414769 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-httpd-config\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.418765 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-ovndb-tls-certs\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.421270 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-combined-ca-bundle\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.428510 5065 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-g7dh5\" (UniqueName: \"kubernetes.io/projected/62f8e978-1ba0-4348-9340-13a1e9083e02-kube-api-access-g7dh5\") pod \"neutron-56c8fc79b6-9z5rs\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.455940 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:04 crc kubenswrapper[5065]: I1008 13:37:04.824821 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b55c5465-lbrmn"] Oct 08 13:37:05 crc kubenswrapper[5065]: I1008 13:37:05.828484 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" event={"ID":"e4512702-9289-4284-9325-674842084fc7","Type":"ContainerDied","Data":"826c106c7a29a3df7e958978c73c0f16d133fc4770415801ae84ff8eb1004905"} Oct 08 13:37:05 crc kubenswrapper[5065]: I1008 13:37:05.828318 5065 generic.go:334] "Generic (PLEG): container finished" podID="e4512702-9289-4284-9325-674842084fc7" containerID="826c106c7a29a3df7e958978c73c0f16d133fc4770415801ae84ff8eb1004905" exitCode=0 Oct 08 13:37:05 crc kubenswrapper[5065]: I1008 13:37:05.829249 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" event={"ID":"e4512702-9289-4284-9325-674842084fc7","Type":"ContainerStarted","Data":"c7a782b5263ace5f9ffc326e539d9fa82d79b6495e300b5bfe519dafb0d3962a"} Oct 08 13:37:05 crc kubenswrapper[5065]: I1008 13:37:05.909616 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56c8fc79b6-9z5rs"] Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.056436 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f88c4599-sd7mw"] Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.063004 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.066808 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.070137 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.070602 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f88c4599-sd7mw"] Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.242633 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm6hf\" (UniqueName: \"kubernetes.io/projected/fd3f72f8-a569-409f-a590-02a0f7fcdc81-kube-api-access-hm6hf\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.242867 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-combined-ca-bundle\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.243036 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-internal-tls-certs\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.243086 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-public-tls-certs\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.243164 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-httpd-config\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.243235 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-ovndb-tls-certs\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.243314 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-config\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.344732 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm6hf\" (UniqueName: 
\"kubernetes.io/projected/fd3f72f8-a569-409f-a590-02a0f7fcdc81-kube-api-access-hm6hf\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.344859 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-combined-ca-bundle\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.344920 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-internal-tls-certs\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.344939 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-public-tls-certs\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.344980 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-httpd-config\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.345021 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-ovndb-tls-certs\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.345061 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-config\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.350579 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-internal-tls-certs\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.351712 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-ovndb-tls-certs\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.354555 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-combined-ca-bundle\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " 
pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.355101 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-public-tls-certs\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.356100 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-httpd-config\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.357832 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-config\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.369710 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm6hf\" (UniqueName: \"kubernetes.io/projected/fd3f72f8-a569-409f-a590-02a0f7fcdc81-kube-api-access-hm6hf\") pod \"neutron-5f88c4599-sd7mw\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.428224 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.849008 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c8fc79b6-9z5rs" event={"ID":"62f8e978-1ba0-4348-9340-13a1e9083e02","Type":"ContainerStarted","Data":"40b40b2edd3a5f5cb3362a3a4a3fcc2cd3710d8209b1bc6f335a96d19b99cfc8"} Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.849291 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c8fc79b6-9z5rs" event={"ID":"62f8e978-1ba0-4348-9340-13a1e9083e02","Type":"ContainerStarted","Data":"d06960d03a3ed02bd4d29a02dbc96df37518f4d292074d7695176fa1518cd411"} Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.849313 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c8fc79b6-9z5rs" event={"ID":"62f8e978-1ba0-4348-9340-13a1e9083e02","Type":"ContainerStarted","Data":"86752f14c63cc747503461a2b205b1301d9a18f0a2e21e85dfb5aacab6aa0a86"} Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.850480 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.851807 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jhs7h" event={"ID":"c5a257f6-4b74-429b-9da0-b76051265822","Type":"ContainerStarted","Data":"50463d675c646987c36ceb2f2ed3b6f11964f4129f88e7663cf30f72d3c799c5"} Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.859699 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" event={"ID":"e4512702-9289-4284-9325-674842084fc7","Type":"ContainerStarted","Data":"d619bc60e14851813126cd50d3993edbece8e90cc2c7543c50d2dc75541139b2"} Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.860583 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.895094 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-56c8fc79b6-9z5rs" podStartSLOduration=2.89507762 podStartE2EDuration="2.89507762s" podCreationTimestamp="2025-10-08 13:37:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:06.873850315 +0000 UTC m=+1128.651232072" watchObservedRunningTime="2025-10-08 13:37:06.89507762 +0000 UTC m=+1128.672459377" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.900077 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-jhs7h" podStartSLOduration=3.06208632 podStartE2EDuration="38.900057327s" podCreationTimestamp="2025-10-08 13:36:28 +0000 UTC" firstStartedPulling="2025-10-08 13:36:29.910731833 +0000 UTC m=+1091.688113590" lastFinishedPulling="2025-10-08 13:37:05.74870284 +0000 UTC m=+1127.526084597" observedRunningTime="2025-10-08 13:37:06.892452447 +0000 UTC m=+1128.669834204" watchObservedRunningTime="2025-10-08 13:37:06.900057327 +0000 UTC m=+1128.677439084" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.923874 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" podStartSLOduration=3.923848272 podStartE2EDuration="3.923848272s" podCreationTimestamp="2025-10-08 13:37:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:06.914921736 +0000 UTC m=+1128.692303483" watchObservedRunningTime="2025-10-08 13:37:06.923848272 +0000 UTC m=+1128.701230039" Oct 08 13:37:06 crc kubenswrapper[5065]: I1008 13:37:06.990162 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f88c4599-sd7mw"] Oct 08 13:37:07 crc kubenswrapper[5065]: I1008 13:37:07.893682 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d7qm9" event={"ID":"70fd4e43-69f4-482f-a374-2b8074e6a1d7","Type":"ContainerStarted","Data":"1899c55c276614b9dc345b66b3715c43a46119b2159f018266e34050dda8d031"} Oct 08 13:37:07 crc kubenswrapper[5065]: I1008 13:37:07.896832 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f88c4599-sd7mw" event={"ID":"fd3f72f8-a569-409f-a590-02a0f7fcdc81","Type":"ContainerStarted","Data":"82d033cc034d74a52247d1b9862682e1e567a8d524767ccde6e07f6e29b4e821"} Oct 08 13:37:07 crc kubenswrapper[5065]: I1008 13:37:07.902012 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:37:07 crc kubenswrapper[5065]: I1008 13:37:07.902239 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f88c4599-sd7mw" event={"ID":"fd3f72f8-a569-409f-a590-02a0f7fcdc81","Type":"ContainerStarted","Data":"12a6dc5f131bbdcb6ea1fc707c13a16e4fc8d0b823a7816221475889271b7ff8"} Oct 08 13:37:07 crc kubenswrapper[5065]: I1008 13:37:07.902320 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f88c4599-sd7mw" event={"ID":"fd3f72f8-a569-409f-a590-02a0f7fcdc81","Type":"ContainerStarted","Data":"f14c9aa44d8493f186222b9073cb137ce22c9868d2dcd170fb579a3f5108884e"} Oct 08 13:37:07 crc kubenswrapper[5065]: I1008 13:37:07.938922 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f88c4599-sd7mw" 
podStartSLOduration=1.938904215 podStartE2EDuration="1.938904215s" podCreationTimestamp="2025-10-08 13:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:07.935922583 +0000 UTC m=+1129.713304350" watchObservedRunningTime="2025-10-08 13:37:07.938904215 +0000 UTC m=+1129.716285972" Oct 08 13:37:07 crc kubenswrapper[5065]: I1008 13:37:07.940557 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-d7qm9" podStartSLOduration=2.022713378 podStartE2EDuration="38.940548601s" podCreationTimestamp="2025-10-08 13:36:29 +0000 UTC" firstStartedPulling="2025-10-08 13:36:30.125612854 +0000 UTC m=+1091.902994611" lastFinishedPulling="2025-10-08 13:37:07.043448057 +0000 UTC m=+1128.820829834" observedRunningTime="2025-10-08 13:37:07.915427019 +0000 UTC m=+1129.692808796" watchObservedRunningTime="2025-10-08 13:37:07.940548601 +0000 UTC m=+1129.717930358" Oct 08 13:37:08 crc kubenswrapper[5065]: I1008 13:37:08.257986 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:08 crc kubenswrapper[5065]: I1008 13:37:08.265979 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.343867 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.439689 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-jpkmx"] Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.439929 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" podUID="b37536e2-bf39-4698-ace5-c7d2755306c0" containerName="dnsmasq-dns" containerID="cri-o://64e43f66b9824ebbefe124763ee03223e5c39cbcc2de87cafa33226497ea768e" gracePeriod=10 Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.912865 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.971849 5065 generic.go:334] "Generic (PLEG): container finished" podID="70fd4e43-69f4-482f-a374-2b8074e6a1d7" containerID="1899c55c276614b9dc345b66b3715c43a46119b2159f018266e34050dda8d031" exitCode=0 Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.971904 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d7qm9" event={"ID":"70fd4e43-69f4-482f-a374-2b8074e6a1d7","Type":"ContainerDied","Data":"1899c55c276614b9dc345b66b3715c43a46119b2159f018266e34050dda8d031"} Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.976602 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8849d3af-fdf6-4ec0-a66f-58da38c924f5","Type":"ContainerStarted","Data":"8e1c1b058eaa10b594bc46cb00669b7e40667539b7c78cc3fd9e34c9ed17df40"} Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.976727 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="ceilometer-central-agent" containerID="cri-o://276a073bbe59b185d58c26f2d4fcf6d9244997e7c226c873d81a0728a73a9e5d" gracePeriod=30 Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.976901 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="proxy-httpd" containerID="cri-o://8e1c1b058eaa10b594bc46cb00669b7e40667539b7c78cc3fd9e34c9ed17df40" gracePeriod=30 Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.976951 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="sg-core" containerID="cri-o://8bf239b4351487f70f99947f43eb3cda72cd424149f32b9c6432544672391cfa" gracePeriod=30 Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.976983 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="ceilometer-notification-agent" containerID="cri-o://734a2592bd428d54543ea3d6accad5b71e8559c67017752447ff898572783b6c" gracePeriod=30 Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.976898 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.981926 5065 generic.go:334] "Generic (PLEG): container finished" podID="b37536e2-bf39-4698-ace5-c7d2755306c0" containerID="64e43f66b9824ebbefe124763ee03223e5c39cbcc2de87cafa33226497ea768e" exitCode=0 Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.981957 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" event={"ID":"b37536e2-bf39-4698-ace5-c7d2755306c0","Type":"ContainerDied","Data":"64e43f66b9824ebbefe124763ee03223e5c39cbcc2de87cafa33226497ea768e"} Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.981974 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" event={"ID":"b37536e2-bf39-4698-ace5-c7d2755306c0","Type":"ContainerDied","Data":"0994a306d6c4f384d3652150bb824cec0200b9b64c7e29e778e74fa820af27e3"} Oct 08 13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.981990 5065 scope.go:117] "RemoveContainer" containerID="64e43f66b9824ebbefe124763ee03223e5c39cbcc2de87cafa33226497ea768e" Oct 08 
13:37:14 crc kubenswrapper[5065]: I1008 13:37:14.982211 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc68bd5-jpkmx" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.020676 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.605966042 podStartE2EDuration="48.020656375s" podCreationTimestamp="2025-10-08 13:36:27 +0000 UTC" firstStartedPulling="2025-10-08 13:36:28.489227538 +0000 UTC m=+1090.266609295" lastFinishedPulling="2025-10-08 13:37:13.903917871 +0000 UTC m=+1135.681299628" observedRunningTime="2025-10-08 13:37:15.014864356 +0000 UTC m=+1136.792246133" watchObservedRunningTime="2025-10-08 13:37:15.020656375 +0000 UTC m=+1136.798038132" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.023146 5065 scope.go:117] "RemoveContainer" containerID="cc8cd6afc89a19767c8c125f86ef0f0c7538663941936385347a03076fabb311" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.041640 5065 scope.go:117] "RemoveContainer" containerID="64e43f66b9824ebbefe124763ee03223e5c39cbcc2de87cafa33226497ea768e" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.042066 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-dns-svc\") pod \"b37536e2-bf39-4698-ace5-c7d2755306c0\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.042138 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-config\") pod \"b37536e2-bf39-4698-ace5-c7d2755306c0\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.042161 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcvqj\" (UniqueName: \"kubernetes.io/projected/b37536e2-bf39-4698-ace5-c7d2755306c0-kube-api-access-gcvqj\") pod \"b37536e2-bf39-4698-ace5-c7d2755306c0\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.042209 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-ovsdbserver-nb\") pod \"b37536e2-bf39-4698-ace5-c7d2755306c0\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.042273 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-dns-swift-storage-0\") pod \"b37536e2-bf39-4698-ace5-c7d2755306c0\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.042354 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-ovsdbserver-sb\") pod \"b37536e2-bf39-4698-ace5-c7d2755306c0\" (UID: \"b37536e2-bf39-4698-ace5-c7d2755306c0\") " Oct 08 13:37:15 crc kubenswrapper[5065]: E1008 13:37:15.043962 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64e43f66b9824ebbefe124763ee03223e5c39cbcc2de87cafa33226497ea768e\": container with ID starting 
with 64e43f66b9824ebbefe124763ee03223e5c39cbcc2de87cafa33226497ea768e not found: ID does not exist" containerID="64e43f66b9824ebbefe124763ee03223e5c39cbcc2de87cafa33226497ea768e" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.043999 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64e43f66b9824ebbefe124763ee03223e5c39cbcc2de87cafa33226497ea768e"} err="failed to get container status \"64e43f66b9824ebbefe124763ee03223e5c39cbcc2de87cafa33226497ea768e\": rpc error: code = NotFound desc = could not find container \"64e43f66b9824ebbefe124763ee03223e5c39cbcc2de87cafa33226497ea768e\": container with ID starting with 64e43f66b9824ebbefe124763ee03223e5c39cbcc2de87cafa33226497ea768e not found: ID does not exist" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.044020 5065 scope.go:117] "RemoveContainer" containerID="cc8cd6afc89a19767c8c125f86ef0f0c7538663941936385347a03076fabb311" Oct 08 13:37:15 crc kubenswrapper[5065]: E1008 13:37:15.044246 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc8cd6afc89a19767c8c125f86ef0f0c7538663941936385347a03076fabb311\": container with ID starting with cc8cd6afc89a19767c8c125f86ef0f0c7538663941936385347a03076fabb311 not found: ID does not exist" containerID="cc8cd6afc89a19767c8c125f86ef0f0c7538663941936385347a03076fabb311" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.044279 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc8cd6afc89a19767c8c125f86ef0f0c7538663941936385347a03076fabb311"} err="failed to get container status \"cc8cd6afc89a19767c8c125f86ef0f0c7538663941936385347a03076fabb311\": rpc error: code = NotFound desc = could not find container \"cc8cd6afc89a19767c8c125f86ef0f0c7538663941936385347a03076fabb311\": container with ID starting with cc8cd6afc89a19767c8c125f86ef0f0c7538663941936385347a03076fabb311 not found: ID does not exist" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.048163 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37536e2-bf39-4698-ace5-c7d2755306c0-kube-api-access-gcvqj" (OuterVolumeSpecName: "kube-api-access-gcvqj") pod "b37536e2-bf39-4698-ace5-c7d2755306c0" (UID: "b37536e2-bf39-4698-ace5-c7d2755306c0"). InnerVolumeSpecName "kube-api-access-gcvqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.091762 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b37536e2-bf39-4698-ace5-c7d2755306c0" (UID: "b37536e2-bf39-4698-ace5-c7d2755306c0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.102862 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b37536e2-bf39-4698-ace5-c7d2755306c0" (UID: "b37536e2-bf39-4698-ace5-c7d2755306c0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.105836 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-config" (OuterVolumeSpecName: "config") pod "b37536e2-bf39-4698-ace5-c7d2755306c0" (UID: "b37536e2-bf39-4698-ace5-c7d2755306c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.108523 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b37536e2-bf39-4698-ace5-c7d2755306c0" (UID: "b37536e2-bf39-4698-ace5-c7d2755306c0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.109891 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b37536e2-bf39-4698-ace5-c7d2755306c0" (UID: "b37536e2-bf39-4698-ace5-c7d2755306c0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.144176 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.144207 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.144216 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcvqj\" (UniqueName: \"kubernetes.io/projected/b37536e2-bf39-4698-ace5-c7d2755306c0-kube-api-access-gcvqj\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.144228 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.144236 5065 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.144244 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37536e2-bf39-4698-ace5-c7d2755306c0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.316907 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-jpkmx"] Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.324923 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-jpkmx"] Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.994927 5065 generic.go:334] "Generic (PLEG): container finished" podID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerID="8e1c1b058eaa10b594bc46cb00669b7e40667539b7c78cc3fd9e34c9ed17df40" exitCode=0 Oct 08 13:37:15 crc 
kubenswrapper[5065]: I1008 13:37:15.995277 5065 generic.go:334] "Generic (PLEG): container finished" podID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerID="8bf239b4351487f70f99947f43eb3cda72cd424149f32b9c6432544672391cfa" exitCode=2 Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.995291 5065 generic.go:334] "Generic (PLEG): container finished" podID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerID="276a073bbe59b185d58c26f2d4fcf6d9244997e7c226c873d81a0728a73a9e5d" exitCode=0 Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.995017 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8849d3af-fdf6-4ec0-a66f-58da38c924f5","Type":"ContainerDied","Data":"8e1c1b058eaa10b594bc46cb00669b7e40667539b7c78cc3fd9e34c9ed17df40"} Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.995422 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8849d3af-fdf6-4ec0-a66f-58da38c924f5","Type":"ContainerDied","Data":"8bf239b4351487f70f99947f43eb3cda72cd424149f32b9c6432544672391cfa"} Oct 08 13:37:15 crc kubenswrapper[5065]: I1008 13:37:15.995448 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8849d3af-fdf6-4ec0-a66f-58da38c924f5","Type":"ContainerDied","Data":"276a073bbe59b185d58c26f2d4fcf6d9244997e7c226c873d81a0728a73a9e5d"} Oct 08 13:37:16 crc kubenswrapper[5065]: I1008 13:37:16.401049 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-d7qm9" Oct 08 13:37:16 crc kubenswrapper[5065]: I1008 13:37:16.463557 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70fd4e43-69f4-482f-a374-2b8074e6a1d7-db-sync-config-data\") pod \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\" (UID: \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\") " Oct 08 13:37:16 crc kubenswrapper[5065]: I1008 13:37:16.463695 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb72f\" (UniqueName: \"kubernetes.io/projected/70fd4e43-69f4-482f-a374-2b8074e6a1d7-kube-api-access-tb72f\") pod \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\" (UID: \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\") " Oct 08 13:37:16 crc kubenswrapper[5065]: I1008 13:37:16.464543 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70fd4e43-69f4-482f-a374-2b8074e6a1d7-combined-ca-bundle\") pod \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\" (UID: \"70fd4e43-69f4-482f-a374-2b8074e6a1d7\") " Oct 08 13:37:16 crc kubenswrapper[5065]: I1008 13:37:16.468114 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70fd4e43-69f4-482f-a374-2b8074e6a1d7-kube-api-access-tb72f" (OuterVolumeSpecName: "kube-api-access-tb72f") pod "70fd4e43-69f4-482f-a374-2b8074e6a1d7" (UID: "70fd4e43-69f4-482f-a374-2b8074e6a1d7"). InnerVolumeSpecName "kube-api-access-tb72f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:16 crc kubenswrapper[5065]: I1008 13:37:16.468907 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70fd4e43-69f4-482f-a374-2b8074e6a1d7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "70fd4e43-69f4-482f-a374-2b8074e6a1d7" (UID: "70fd4e43-69f4-482f-a374-2b8074e6a1d7"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:16 crc kubenswrapper[5065]: I1008 13:37:16.507665 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70fd4e43-69f4-482f-a374-2b8074e6a1d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70fd4e43-69f4-482f-a374-2b8074e6a1d7" (UID: "70fd4e43-69f4-482f-a374-2b8074e6a1d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:16 crc kubenswrapper[5065]: I1008 13:37:16.566628 5065 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70fd4e43-69f4-482f-a374-2b8074e6a1d7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:16 crc kubenswrapper[5065]: I1008 13:37:16.566662 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb72f\" (UniqueName: \"kubernetes.io/projected/70fd4e43-69f4-482f-a374-2b8074e6a1d7-kube-api-access-tb72f\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:16 crc kubenswrapper[5065]: I1008 13:37:16.566675 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70fd4e43-69f4-482f-a374-2b8074e6a1d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:16 crc kubenswrapper[5065]: I1008 13:37:16.882919 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37536e2-bf39-4698-ace5-c7d2755306c0" path="/var/lib/kubelet/pods/b37536e2-bf39-4698-ace5-c7d2755306c0/volumes" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.006121 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d7qm9" event={"ID":"70fd4e43-69f4-482f-a374-2b8074e6a1d7","Type":"ContainerDied","Data":"18f21067bc9465852b3d26fd1af5c0fda2274042004002886c07170525d38af5"} Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.006179 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18f21067bc9465852b3d26fd1af5c0fda2274042004002886c07170525d38af5" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.006255 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-d7qm9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.009933 5065 generic.go:334] "Generic (PLEG): container finished" podID="c5a257f6-4b74-429b-9da0-b76051265822" containerID="50463d675c646987c36ceb2f2ed3b6f11964f4129f88e7663cf30f72d3c799c5" exitCode=0 Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.010013 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jhs7h" event={"ID":"c5a257f6-4b74-429b-9da0-b76051265822","Type":"ContainerDied","Data":"50463d675c646987c36ceb2f2ed3b6f11964f4129f88e7663cf30f72d3c799c5"} Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.235930 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5d9966bcdf-t8xzk"] Oct 08 13:37:17 crc kubenswrapper[5065]: E1008 13:37:17.236297 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70fd4e43-69f4-482f-a374-2b8074e6a1d7" containerName="barbican-db-sync" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.236313 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="70fd4e43-69f4-482f-a374-2b8074e6a1d7" containerName="barbican-db-sync" Oct 08 13:37:17 crc kubenswrapper[5065]: E1008 13:37:17.236331 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37536e2-bf39-4698-ace5-c7d2755306c0" containerName="init" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.236338 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37536e2-bf39-4698-ace5-c7d2755306c0" containerName="init" Oct 08 13:37:17 crc kubenswrapper[5065]: E1008 13:37:17.236351 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37536e2-bf39-4698-ace5-c7d2755306c0" containerName="dnsmasq-dns" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.236359 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37536e2-bf39-4698-ace5-c7d2755306c0" containerName="dnsmasq-dns" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.236605 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37536e2-bf39-4698-ace5-c7d2755306c0" containerName="dnsmasq-dns" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.236623 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="70fd4e43-69f4-482f-a374-2b8074e6a1d7" containerName="barbican-db-sync" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.237477 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.239716 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.240114 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vjnn6" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.240222 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.272734 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7b79866b6-r6s8q"] Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.276375 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.284767 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.287646 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d9966bcdf-t8xzk"] Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.310237 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b79866b6-r6s8q"] Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.354671 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-bcg5s"] Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.356640 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.376685 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-bcg5s"] Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.383521 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-config-data\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.383578 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-576d7\" (UniqueName: \"kubernetes.io/projected/f580765e-50e7-42a1-a798-325b80e29e9d-kube-api-access-576d7\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.383604 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-combined-ca-bundle\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.383637 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj7qz\" (UniqueName: \"kubernetes.io/projected/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-kube-api-access-jj7qz\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.383667 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f580765e-50e7-42a1-a798-325b80e29e9d-logs\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.383691 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-config-data-custom\") pod 
\"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.383751 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-config-data-custom\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.383778 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-logs\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.383792 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-combined-ca-bundle\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.383811 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-config-data\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.477300 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-595455d844-89td9"] Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.478662 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.481190 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.485812 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-576d7\" (UniqueName: \"kubernetes.io/projected/f580765e-50e7-42a1-a798-325b80e29e9d-kube-api-access-576d7\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.485859 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-combined-ca-bundle\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.485880 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-dns-swift-storage-0\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.485918 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj7qz\" (UniqueName: \"kubernetes.io/projected/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-kube-api-access-jj7qz\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.485947 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-dns-svc\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.485964 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f580765e-50e7-42a1-a798-325b80e29e9d-logs\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.485991 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-config-data-custom\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.486013 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-ovsdbserver-nb\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc 
kubenswrapper[5065]: I1008 13:37:17.486050 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2958\" (UniqueName: \"kubernetes.io/projected/c69efee8-33fd-4bca-9b7f-bbeedb840d83-kube-api-access-m2958\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.486080 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-config-data-custom\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.486106 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-logs\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.486126 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-combined-ca-bundle\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.486147 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-config-data\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.486165 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-config\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.486188 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-ovsdbserver-sb\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.486209 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-config-data\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.487548 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f580765e-50e7-42a1-a798-325b80e29e9d-logs\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " 
pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.489372 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-logs\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.492064 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-595455d844-89td9"] Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.499913 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-config-data\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.501958 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-config-data-custom\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.503837 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-config-data-custom\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.504631 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-combined-ca-bundle\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.504713 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-config-data\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.508512 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-combined-ca-bundle\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.522751 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj7qz\" (UniqueName: \"kubernetes.io/projected/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-kube-api-access-jj7qz\") pod \"barbican-worker-5d9966bcdf-t8xzk\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.527122 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-576d7\" 
(UniqueName: \"kubernetes.io/projected/f580765e-50e7-42a1-a798-325b80e29e9d-kube-api-access-576d7\") pod \"barbican-keystone-listener-7b79866b6-r6s8q\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.567224 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.587903 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-dns-svc\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.587957 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-combined-ca-bundle\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.587992 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-ovsdbserver-nb\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.588011 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-config-data-custom\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.588046 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2958\" (UniqueName: \"kubernetes.io/projected/c69efee8-33fd-4bca-9b7f-bbeedb840d83-kube-api-access-m2958\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.588071 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gcqn\" (UniqueName: \"kubernetes.io/projected/c06929d7-4d71-4a93-9979-e332cf99b06d-kube-api-access-8gcqn\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.588104 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-config-data\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.588125 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-config\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: 
\"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.588144 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-ovsdbserver-sb\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.588166 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c06929d7-4d71-4a93-9979-e332cf99b06d-logs\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.588197 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-dns-swift-storage-0\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.589603 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-dns-svc\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.589652 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-dns-swift-storage-0\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.589925 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-ovsdbserver-sb\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.590240 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-config\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.592012 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-ovsdbserver-nb\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.605511 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2958\" (UniqueName: \"kubernetes.io/projected/c69efee8-33fd-4bca-9b7f-bbeedb840d83-kube-api-access-m2958\") pod \"dnsmasq-dns-5c78787df7-bcg5s\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") " pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 
13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.689684 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-combined-ca-bundle\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.689749 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-config-data-custom\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.689811 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gcqn\" (UniqueName: \"kubernetes.io/projected/c06929d7-4d71-4a93-9979-e332cf99b06d-kube-api-access-8gcqn\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.689864 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-config-data\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.689911 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c06929d7-4d71-4a93-9979-e332cf99b06d-logs\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.690448 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c06929d7-4d71-4a93-9979-e332cf99b06d-logs\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.695037 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-combined-ca-bundle\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.695324 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-config-data\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.697154 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-config-data-custom\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.712960 5065 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.713311 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gcqn\" (UniqueName: \"kubernetes.io/projected/c06929d7-4d71-4a93-9979-e332cf99b06d-kube-api-access-8gcqn\") pod \"barbican-api-595455d844-89td9\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.722885 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.743243 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.892639 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-combined-ca-bundle\") pod \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.893003 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-config-data\") pod \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.893058 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8849d3af-fdf6-4ec0-a66f-58da38c924f5-run-httpd\") pod \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.893075 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-scripts\") pod \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.893157 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8849d3af-fdf6-4ec0-a66f-58da38c924f5-log-httpd\") pod \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.893201 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brwns\" (UniqueName: \"kubernetes.io/projected/8849d3af-fdf6-4ec0-a66f-58da38c924f5-kube-api-access-brwns\") pod \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.893239 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-sg-core-conf-yaml\") pod \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\" (UID: \"8849d3af-fdf6-4ec0-a66f-58da38c924f5\") " Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.896218 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8849d3af-fdf6-4ec0-a66f-58da38c924f5-log-httpd" (OuterVolumeSpecName: 
"log-httpd") pod "8849d3af-fdf6-4ec0-a66f-58da38c924f5" (UID: "8849d3af-fdf6-4ec0-a66f-58da38c924f5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.896345 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8849d3af-fdf6-4ec0-a66f-58da38c924f5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8849d3af-fdf6-4ec0-a66f-58da38c924f5" (UID: "8849d3af-fdf6-4ec0-a66f-58da38c924f5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.899963 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.906396 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-scripts" (OuterVolumeSpecName: "scripts") pod "8849d3af-fdf6-4ec0-a66f-58da38c924f5" (UID: "8849d3af-fdf6-4ec0-a66f-58da38c924f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.906563 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8849d3af-fdf6-4ec0-a66f-58da38c924f5-kube-api-access-brwns" (OuterVolumeSpecName: "kube-api-access-brwns") pod "8849d3af-fdf6-4ec0-a66f-58da38c924f5" (UID: "8849d3af-fdf6-4ec0-a66f-58da38c924f5"). InnerVolumeSpecName "kube-api-access-brwns". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:17 crc kubenswrapper[5065]: I1008 13:37:17.926806 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8849d3af-fdf6-4ec0-a66f-58da38c924f5" (UID: "8849d3af-fdf6-4ec0-a66f-58da38c924f5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.002280 5065 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8849d3af-fdf6-4ec0-a66f-58da38c924f5-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.002324 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.002338 5065 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8849d3af-fdf6-4ec0-a66f-58da38c924f5-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.002351 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brwns\" (UniqueName: \"kubernetes.io/projected/8849d3af-fdf6-4ec0-a66f-58da38c924f5-kube-api-access-brwns\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.002364 5065 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.007818 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8849d3af-fdf6-4ec0-a66f-58da38c924f5" (UID: "8849d3af-fdf6-4ec0-a66f-58da38c924f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.013477 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-config-data" (OuterVolumeSpecName: "config-data") pod "8849d3af-fdf6-4ec0-a66f-58da38c924f5" (UID: "8849d3af-fdf6-4ec0-a66f-58da38c924f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.042296 5065 generic.go:334] "Generic (PLEG): container finished" podID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerID="734a2592bd428d54543ea3d6accad5b71e8559c67017752447ff898572783b6c" exitCode=0 Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.042534 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.042603 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8849d3af-fdf6-4ec0-a66f-58da38c924f5","Type":"ContainerDied","Data":"734a2592bd428d54543ea3d6accad5b71e8559c67017752447ff898572783b6c"} Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.042633 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8849d3af-fdf6-4ec0-a66f-58da38c924f5","Type":"ContainerDied","Data":"cb8351155cf1aa8c2c261515cb436b4b3256edd710555615b66d6d012fe71e0b"} Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.042649 5065 scope.go:117] "RemoveContainer" containerID="8e1c1b058eaa10b594bc46cb00669b7e40667539b7c78cc3fd9e34c9ed17df40" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.083062 5065 scope.go:117] "RemoveContainer" containerID="8bf239b4351487f70f99947f43eb3cda72cd424149f32b9c6432544672391cfa" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.084204 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.101402 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.103991 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.104027 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8849d3af-fdf6-4ec0-a66f-58da38c924f5-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.117758 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:37:18 crc kubenswrapper[5065]: E1008 13:37:18.118166 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="sg-core" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.118181 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="sg-core" Oct 08 13:37:18 crc kubenswrapper[5065]: E1008 13:37:18.118194 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="proxy-httpd" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.118200 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="proxy-httpd" Oct 08 13:37:18 crc kubenswrapper[5065]: E1008 13:37:18.118217 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="ceilometer-central-agent" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.118223 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="ceilometer-central-agent" Oct 08 13:37:18 crc kubenswrapper[5065]: E1008 13:37:18.118233 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="ceilometer-notification-agent" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.118239 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" 
containerName="ceilometer-notification-agent" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.118400 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="ceilometer-notification-agent" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.118432 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="proxy-httpd" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.118447 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="ceilometer-central-agent" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.118462 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" containerName="sg-core" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.120141 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.126875 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.128370 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.128598 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.135802 5065 scope.go:117] "RemoveContainer" containerID="734a2592bd428d54543ea3d6accad5b71e8559c67017752447ff898572783b6c" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.166829 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d9966bcdf-t8xzk"] Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.167792 5065 scope.go:117] "RemoveContainer" containerID="276a073bbe59b185d58c26f2d4fcf6d9244997e7c226c873d81a0728a73a9e5d" Oct 08 13:37:18 crc kubenswrapper[5065]: W1008 13:37:18.185276 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a5e8a94_d14f_4b2e_9a5f_a09c9f4e0cac.slice/crio-6d978fa991d308dad178f38857a44d6d55145ebc47bb02fa18ad0223b855976f WatchSource:0}: Error finding container 6d978fa991d308dad178f38857a44d6d55145ebc47bb02fa18ad0223b855976f: Status 404 returned error can't find the container with id 6d978fa991d308dad178f38857a44d6d55145ebc47bb02fa18ad0223b855976f Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.201063 5065 scope.go:117] "RemoveContainer" containerID="8e1c1b058eaa10b594bc46cb00669b7e40667539b7c78cc3fd9e34c9ed17df40" Oct 08 13:37:18 crc kubenswrapper[5065]: E1008 13:37:18.201702 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e1c1b058eaa10b594bc46cb00669b7e40667539b7c78cc3fd9e34c9ed17df40\": container with ID starting with 8e1c1b058eaa10b594bc46cb00669b7e40667539b7c78cc3fd9e34c9ed17df40 not found: ID does not exist" containerID="8e1c1b058eaa10b594bc46cb00669b7e40667539b7c78cc3fd9e34c9ed17df40" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.201762 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e1c1b058eaa10b594bc46cb00669b7e40667539b7c78cc3fd9e34c9ed17df40"} err="failed to get container status 
\"8e1c1b058eaa10b594bc46cb00669b7e40667539b7c78cc3fd9e34c9ed17df40\": rpc error: code = NotFound desc = could not find container \"8e1c1b058eaa10b594bc46cb00669b7e40667539b7c78cc3fd9e34c9ed17df40\": container with ID starting with 8e1c1b058eaa10b594bc46cb00669b7e40667539b7c78cc3fd9e34c9ed17df40 not found: ID does not exist" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.201793 5065 scope.go:117] "RemoveContainer" containerID="8bf239b4351487f70f99947f43eb3cda72cd424149f32b9c6432544672391cfa" Oct 08 13:37:18 crc kubenswrapper[5065]: E1008 13:37:18.202025 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bf239b4351487f70f99947f43eb3cda72cd424149f32b9c6432544672391cfa\": container with ID starting with 8bf239b4351487f70f99947f43eb3cda72cd424149f32b9c6432544672391cfa not found: ID does not exist" containerID="8bf239b4351487f70f99947f43eb3cda72cd424149f32b9c6432544672391cfa" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.202066 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bf239b4351487f70f99947f43eb3cda72cd424149f32b9c6432544672391cfa"} err="failed to get container status \"8bf239b4351487f70f99947f43eb3cda72cd424149f32b9c6432544672391cfa\": rpc error: code = NotFound desc = could not find container \"8bf239b4351487f70f99947f43eb3cda72cd424149f32b9c6432544672391cfa\": container with ID starting with 8bf239b4351487f70f99947f43eb3cda72cd424149f32b9c6432544672391cfa not found: ID does not exist" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.202085 5065 scope.go:117] "RemoveContainer" containerID="734a2592bd428d54543ea3d6accad5b71e8559c67017752447ff898572783b6c" Oct 08 13:37:18 crc kubenswrapper[5065]: E1008 13:37:18.202270 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"734a2592bd428d54543ea3d6accad5b71e8559c67017752447ff898572783b6c\": container with ID starting with 734a2592bd428d54543ea3d6accad5b71e8559c67017752447ff898572783b6c not found: ID does not exist" containerID="734a2592bd428d54543ea3d6accad5b71e8559c67017752447ff898572783b6c" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.202314 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"734a2592bd428d54543ea3d6accad5b71e8559c67017752447ff898572783b6c"} err="failed to get container status \"734a2592bd428d54543ea3d6accad5b71e8559c67017752447ff898572783b6c\": rpc error: code = NotFound desc = could not find container \"734a2592bd428d54543ea3d6accad5b71e8559c67017752447ff898572783b6c\": container with ID starting with 734a2592bd428d54543ea3d6accad5b71e8559c67017752447ff898572783b6c not found: ID does not exist" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.202333 5065 scope.go:117] "RemoveContainer" containerID="276a073bbe59b185d58c26f2d4fcf6d9244997e7c226c873d81a0728a73a9e5d" Oct 08 13:37:18 crc kubenswrapper[5065]: E1008 13:37:18.203105 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"276a073bbe59b185d58c26f2d4fcf6d9244997e7c226c873d81a0728a73a9e5d\": container with ID starting with 276a073bbe59b185d58c26f2d4fcf6d9244997e7c226c873d81a0728a73a9e5d not found: ID does not exist" containerID="276a073bbe59b185d58c26f2d4fcf6d9244997e7c226c873d81a0728a73a9e5d" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.203143 5065 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"276a073bbe59b185d58c26f2d4fcf6d9244997e7c226c873d81a0728a73a9e5d"} err="failed to get container status \"276a073bbe59b185d58c26f2d4fcf6d9244997e7c226c873d81a0728a73a9e5d\": rpc error: code = NotFound desc = could not find container \"276a073bbe59b185d58c26f2d4fcf6d9244997e7c226c873d81a0728a73a9e5d\": container with ID starting with 276a073bbe59b185d58c26f2d4fcf6d9244997e7c226c873d81a0728a73a9e5d not found: ID does not exist" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.207883 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-scripts\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.208262 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a25d389f-4c59-4f3f-b110-291380171975-log-httpd\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.208339 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a25d389f-4c59-4f3f-b110-291380171975-run-httpd\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.208372 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8c9\" (UniqueName: \"kubernetes.io/projected/a25d389f-4c59-4f3f-b110-291380171975-kube-api-access-gv8c9\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.208552 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-config-data\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.208633 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.208711 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.273250 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-bcg5s"] Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.283241 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b79866b6-r6s8q"] Oct 08 13:37:18 crc kubenswrapper[5065]: W1008 13:37:18.294246 5065 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc69efee8_33fd_4bca_9b7f_bbeedb840d83.slice/crio-cc7d42cdcc3261e9bf0957d377dd814ea67ac8d452829bcb0f616989d6713491 WatchSource:0}: Error finding container cc7d42cdcc3261e9bf0957d377dd814ea67ac8d452829bcb0f616989d6713491: Status 404 returned error can't find the container with id cc7d42cdcc3261e9bf0957d377dd814ea67ac8d452829bcb0f616989d6713491 Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.310589 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-config-data\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.310654 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.310698 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.310721 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-scripts\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.310783 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a25d389f-4c59-4f3f-b110-291380171975-log-httpd\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.310811 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a25d389f-4c59-4f3f-b110-291380171975-run-httpd\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.310830 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8c9\" (UniqueName: \"kubernetes.io/projected/a25d389f-4c59-4f3f-b110-291380171975-kube-api-access-gv8c9\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.311633 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a25d389f-4c59-4f3f-b110-291380171975-log-httpd\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.311988 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a25d389f-4c59-4f3f-b110-291380171975-run-httpd\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") 
" pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.317854 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.317953 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-config-data\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.328564 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-scripts\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.330325 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8c9\" (UniqueName: \"kubernetes.io/projected/a25d389f-4c59-4f3f-b110-291380171975-kube-api-access-gv8c9\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.332825 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.453406 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.548177 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.579147 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-595455d844-89td9"] Oct 08 13:37:18 crc kubenswrapper[5065]: W1008 13:37:18.595533 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc06929d7_4d71_4a93_9979_e332cf99b06d.slice/crio-67fa0198f174e476066f25af2e6e67bf69c1adc6a666869e9fcb30770dba46ad WatchSource:0}: Error finding container 67fa0198f174e476066f25af2e6e67bf69c1adc6a666869e9fcb30770dba46ad: Status 404 returned error can't find the container with id 67fa0198f174e476066f25af2e6e67bf69c1adc6a666869e9fcb30770dba46ad Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.616321 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-db-sync-config-data\") pod \"c5a257f6-4b74-429b-9da0-b76051265822\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.616390 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5a257f6-4b74-429b-9da0-b76051265822-etc-machine-id\") pod \"c5a257f6-4b74-429b-9da0-b76051265822\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.616489 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mtbz\" (UniqueName: \"kubernetes.io/projected/c5a257f6-4b74-429b-9da0-b76051265822-kube-api-access-7mtbz\") pod \"c5a257f6-4b74-429b-9da0-b76051265822\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.616577 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-config-data\") pod \"c5a257f6-4b74-429b-9da0-b76051265822\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.616609 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a257f6-4b74-429b-9da0-b76051265822-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c5a257f6-4b74-429b-9da0-b76051265822" (UID: "c5a257f6-4b74-429b-9da0-b76051265822"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.616665 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-combined-ca-bundle\") pod \"c5a257f6-4b74-429b-9da0-b76051265822\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.616704 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-scripts\") pod \"c5a257f6-4b74-429b-9da0-b76051265822\" (UID: \"c5a257f6-4b74-429b-9da0-b76051265822\") " Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.617159 5065 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5a257f6-4b74-429b-9da0-b76051265822-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.636708 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-scripts" (OuterVolumeSpecName: "scripts") pod "c5a257f6-4b74-429b-9da0-b76051265822" (UID: "c5a257f6-4b74-429b-9da0-b76051265822"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.637145 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a257f6-4b74-429b-9da0-b76051265822-kube-api-access-7mtbz" (OuterVolumeSpecName: "kube-api-access-7mtbz") pod "c5a257f6-4b74-429b-9da0-b76051265822" (UID: "c5a257f6-4b74-429b-9da0-b76051265822"). InnerVolumeSpecName "kube-api-access-7mtbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.637230 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c5a257f6-4b74-429b-9da0-b76051265822" (UID: "c5a257f6-4b74-429b-9da0-b76051265822"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.671742 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5a257f6-4b74-429b-9da0-b76051265822" (UID: "c5a257f6-4b74-429b-9da0-b76051265822"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.691077 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-config-data" (OuterVolumeSpecName: "config-data") pod "c5a257f6-4b74-429b-9da0-b76051265822" (UID: "c5a257f6-4b74-429b-9da0-b76051265822"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.720100 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.720186 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.720205 5065 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.720217 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mtbz\" (UniqueName: \"kubernetes.io/projected/c5a257f6-4b74-429b-9da0-b76051265822-kube-api-access-7mtbz\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.720234 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a257f6-4b74-429b-9da0-b76051265822-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.899637 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8849d3af-fdf6-4ec0-a66f-58da38c924f5" path="/var/lib/kubelet/pods/8849d3af-fdf6-4ec0-a66f-58da38c924f5/volumes" Oct 08 13:37:18 crc kubenswrapper[5065]: I1008 13:37:18.991663 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.072380 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9966bcdf-t8xzk" event={"ID":"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac","Type":"ContainerStarted","Data":"6d978fa991d308dad178f38857a44d6d55145ebc47bb02fa18ad0223b855976f"} Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.080940 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595455d844-89td9" event={"ID":"c06929d7-4d71-4a93-9979-e332cf99b06d","Type":"ContainerStarted","Data":"a3359081ffe672c62bcbed33bcf184a5513b037e9d4a90cc2820d0d18acafdcc"} Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.080998 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595455d844-89td9" event={"ID":"c06929d7-4d71-4a93-9979-e332cf99b06d","Type":"ContainerStarted","Data":"959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874"} Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.081020 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595455d844-89td9" event={"ID":"c06929d7-4d71-4a93-9979-e332cf99b06d","Type":"ContainerStarted","Data":"67fa0198f174e476066f25af2e6e67bf69c1adc6a666869e9fcb30770dba46ad"} Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.081058 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.081465 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.084282 5065 generic.go:334] "Generic (PLEG): container finished" 
podID="c69efee8-33fd-4bca-9b7f-bbeedb840d83" containerID="669ae70e6d534ef5f7bfd827a449839639472aff3103196cf2bfe9ca5c9a434b" exitCode=0 Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.084366 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" event={"ID":"c69efee8-33fd-4bca-9b7f-bbeedb840d83","Type":"ContainerDied","Data":"669ae70e6d534ef5f7bfd827a449839639472aff3103196cf2bfe9ca5c9a434b"} Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.084386 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" event={"ID":"c69efee8-33fd-4bca-9b7f-bbeedb840d83","Type":"ContainerStarted","Data":"cc7d42cdcc3261e9bf0957d377dd814ea67ac8d452829bcb0f616989d6713491"} Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.092029 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jhs7h" event={"ID":"c5a257f6-4b74-429b-9da0-b76051265822","Type":"ContainerDied","Data":"a89c20da0276a3947aed1ced43898875afc887bdfcfa04aee8f56a3d061dd158"} Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.092076 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a89c20da0276a3947aed1ced43898875afc887bdfcfa04aee8f56a3d061dd158" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.092152 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jhs7h" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.096869 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a25d389f-4c59-4f3f-b110-291380171975","Type":"ContainerStarted","Data":"9ecaf46547de7d9fe2b426a92e9ba185fa29f9909aee309af50f7e4dd6de572b"} Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.097906 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" event={"ID":"f580765e-50e7-42a1-a798-325b80e29e9d","Type":"ContainerStarted","Data":"6f88963c538b8156c020550621c70a739f51ae1417b73abe0a7e07241bb9e91b"} Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.109172 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-595455d844-89td9" podStartSLOduration=2.109145536 podStartE2EDuration="2.109145536s" podCreationTimestamp="2025-10-08 13:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:19.101071963 +0000 UTC m=+1140.878453720" watchObservedRunningTime="2025-10-08 13:37:19.109145536 +0000 UTC m=+1140.886527293" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.347367 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 13:37:19 crc kubenswrapper[5065]: E1008 13:37:19.349474 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a257f6-4b74-429b-9da0-b76051265822" containerName="cinder-db-sync" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.349490 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a257f6-4b74-429b-9da0-b76051265822" containerName="cinder-db-sync" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.349663 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a257f6-4b74-429b-9da0-b76051265822" containerName="cinder-db-sync" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.350616 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.357790 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.357938 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-px6gq" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.358140 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.357952 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.370522 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.392631 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-bcg5s"] Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.433693 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-scripts\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.433761 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8kcc\" (UniqueName: \"kubernetes.io/projected/efa17eff-40f4-46d8-b24b-b9e46d50c159-kube-api-access-g8kcc\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.433802 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.433920 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.433966 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-config-data\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.433998 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efa17eff-40f4-46d8-b24b-b9e46d50c159-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.492471 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-tnxx4"] Oct 08 13:37:19 crc 
kubenswrapper[5065]: I1008 13:37:19.494021 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.508098 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-tnxx4"] Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.541335 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-ovsdbserver-sb\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.541584 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.541627 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-dns-swift-storage-0\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.544147 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-config\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.544426 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.544544 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-config-data\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.544629 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmdvr\" (UniqueName: \"kubernetes.io/projected/69902974-784e-48c1-af4a-49758dcbaa6f-kube-api-access-gmdvr\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.544710 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efa17eff-40f4-46d8-b24b-b9e46d50c159-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.544933 5065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-ovsdbserver-nb\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.545133 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-dns-svc\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.545252 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-scripts\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.545337 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8kcc\" (UniqueName: \"kubernetes.io/projected/efa17eff-40f4-46d8-b24b-b9e46d50c159-kube-api-access-g8kcc\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.546168 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efa17eff-40f4-46d8-b24b-b9e46d50c159-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.549244 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.550357 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.551660 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-config-data\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.559651 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-scripts\") pod \"cinder-scheduler-0\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.590084 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8kcc\" (UniqueName: \"kubernetes.io/projected/efa17eff-40f4-46d8-b24b-b9e46d50c159-kube-api-access-g8kcc\") pod \"cinder-scheduler-0\" (UID: 
\"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " pod="openstack/cinder-scheduler-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.630429 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.632197 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.637768 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.643065 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.655106 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmdvr\" (UniqueName: \"kubernetes.io/projected/69902974-784e-48c1-af4a-49758dcbaa6f-kube-api-access-gmdvr\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.655176 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-ovsdbserver-nb\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.655216 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-dns-svc\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.655272 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-ovsdbserver-sb\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.655300 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-dns-swift-storage-0\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.655362 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-config\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.656361 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-config\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.657276 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.659770 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-ovsdbserver-sb\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.660712 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-dns-swift-storage-0\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.673762 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-dns-svc\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.677436 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmdvr\" (UniqueName: \"kubernetes.io/projected/69902974-784e-48c1-af4a-49758dcbaa6f-kube-api-access-gmdvr\") pod \"dnsmasq-dns-84bd785c49-tnxx4\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " pod="openstack/dnsmasq-dns-84bd785c49-tnxx4"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.756821 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.756869 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c5332ca-a578-4d27-bc93-e851cb32f22c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.756901 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c5332ca-a578-4d27-bc93-e851cb32f22c-logs\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.756962 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-scripts\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.756985 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4gt9\" (UniqueName: \"kubernetes.io/projected/0c5332ca-a578-4d27-bc93-e851cb32f22c-kube-api-access-b4gt9\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.757014 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-config-data-custom\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.757087 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-config-data\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.781891 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.850283 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bd785c49-tnxx4"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.859407 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-config-data\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.859605 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.859630 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c5332ca-a578-4d27-bc93-e851cb32f22c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.859648 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c5332ca-a578-4d27-bc93-e851cb32f22c-logs\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.859698 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-scripts\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.859718 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4gt9\" (UniqueName: \"kubernetes.io/projected/0c5332ca-a578-4d27-bc93-e851cb32f22c-kube-api-access-b4gt9\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.859751 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-config-data-custom\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.859753 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c5332ca-a578-4d27-bc93-e851cb32f22c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.860604 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c5332ca-a578-4d27-bc93-e851cb32f22c-logs\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.864528 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-config-data-custom\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.864692 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-scripts\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.865050 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.870551 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-config-data\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:19 crc kubenswrapper[5065]: I1008 13:37:19.879261 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4gt9\" (UniqueName: \"kubernetes.io/projected/0c5332ca-a578-4d27-bc93-e851cb32f22c-kube-api-access-b4gt9\") pod \"cinder-api-0\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") " pod="openstack/cinder-api-0"
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.023566 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.113227 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" event={"ID":"c69efee8-33fd-4bca-9b7f-bbeedb840d83","Type":"ContainerStarted","Data":"31d06b1179740a088e61e0eb2077be5374ba5e14140ed69f24e6640397685f08"}
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.113314 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" podUID="c69efee8-33fd-4bca-9b7f-bbeedb840d83" containerName="dnsmasq-dns" containerID="cri-o://31d06b1179740a088e61e0eb2077be5374ba5e14140ed69f24e6640397685f08" gracePeriod=10
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.113400 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c78787df7-bcg5s"
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.114891 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a25d389f-4c59-4f3f-b110-291380171975","Type":"ContainerStarted","Data":"a0b594d00e9d4c672d3568771c0c3ab22b43dcf04a37db230faa87c980a18a20"}
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.137857 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" podStartSLOduration=3.137839795 podStartE2EDuration="3.137839795s" podCreationTimestamp="2025-10-08 13:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:20.13476783 +0000 UTC m=+1141.912149597" watchObservedRunningTime="2025-10-08 13:37:20.137839795 +0000 UTC m=+1141.915221552"
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.705828 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c78787df7-bcg5s"
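
Annotation: the pod_startup_latency_tracker entry above is plain timestamp arithmetic: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp, 13:37:20.137839795 - 13:37:17 = 3.137839795s, and the zeroed pulling timestamps (0001-01-01) indicate no image pull was needed for this start. A quick check in Go, using the same timestamp format the log prints (the trailing m=+... monotonic reading is dropped before parsing):

package main

import (
	"fmt"
	"time"
)

// Reproduces the podStartE2EDuration arithmetic from the entry above:
// watchObservedRunningTime minus podCreationTimestamp.
func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-10-08 13:37:17 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-10-08 13:37:20.137839795 +0000 UTC")
	fmt.Println(observed.Sub(created)) // prints 3.137839795s
}
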
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.776094 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-dns-swift-storage-0\") pod \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") "
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.776623 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-config\") pod \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") "
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.776725 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-ovsdbserver-nb\") pod \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") "
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.776822 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2958\" (UniqueName: \"kubernetes.io/projected/c69efee8-33fd-4bca-9b7f-bbeedb840d83-kube-api-access-m2958\") pod \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") "
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.776946 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-dns-svc\") pod \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") "
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.776967 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-ovsdbserver-sb\") pod \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\" (UID: \"c69efee8-33fd-4bca-9b7f-bbeedb840d83\") "
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.798618 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c69efee8-33fd-4bca-9b7f-bbeedb840d83-kube-api-access-m2958" (OuterVolumeSpecName: "kube-api-access-m2958") pod "c69efee8-33fd-4bca-9b7f-bbeedb840d83" (UID: "c69efee8-33fd-4bca-9b7f-bbeedb840d83"). InnerVolumeSpecName "kube-api-access-m2958". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.864583 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-config" (OuterVolumeSpecName: "config") pod "c69efee8-33fd-4bca-9b7f-bbeedb840d83" (UID: "c69efee8-33fd-4bca-9b7f-bbeedb840d83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.871117 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c69efee8-33fd-4bca-9b7f-bbeedb840d83" (UID: "c69efee8-33fd-4bca-9b7f-bbeedb840d83"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.879503 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2958\" (UniqueName: \"kubernetes.io/projected/c69efee8-33fd-4bca-9b7f-bbeedb840d83-kube-api-access-m2958\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.879525 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-config\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.879533 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.900381 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c69efee8-33fd-4bca-9b7f-bbeedb840d83" (UID: "c69efee8-33fd-4bca-9b7f-bbeedb840d83"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.906509 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c69efee8-33fd-4bca-9b7f-bbeedb840d83" (UID: "c69efee8-33fd-4bca-9b7f-bbeedb840d83"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.933167 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c69efee8-33fd-4bca-9b7f-bbeedb840d83" (UID: "c69efee8-33fd-4bca-9b7f-bbeedb840d83"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.962041 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.988022 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.988062 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.988072 5065 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c69efee8-33fd-4bca-9b7f-bbeedb840d83-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:20 crc kubenswrapper[5065]: I1008 13:37:20.997354 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.134107 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" event={"ID":"f580765e-50e7-42a1-a798-325b80e29e9d","Type":"ContainerStarted","Data":"33ceec7bd32526aadb9fdccd627ea9d6843d1eeca2d61fca38b266e771abc801"}
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.134180 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" event={"ID":"f580765e-50e7-42a1-a798-325b80e29e9d","Type":"ContainerStarted","Data":"e1798572ea57cfc0ec49a70e753277538e330c3d721fda3351da9d563af5e085"}
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.161280 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9966bcdf-t8xzk" event={"ID":"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac","Type":"ContainerStarted","Data":"53a13a8cc7470ca3fcf00e1a1762fd3a353cd97e981efeec2278512dabac8016"}
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.161321 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9966bcdf-t8xzk" event={"ID":"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac","Type":"ContainerStarted","Data":"59a606638a8f15bd4f91168bdf8deccda702a925eb4c86f515c45335f2c32e92"}
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.162569 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"efa17eff-40f4-46d8-b24b-b9e46d50c159","Type":"ContainerStarted","Data":"40f1b63fa99da2cf415669809425ffc1b4c9d38d8dac4ad810d6aee83d4c586d"}
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.166932 5065 generic.go:334] "Generic (PLEG): container finished" podID="c69efee8-33fd-4bca-9b7f-bbeedb840d83" containerID="31d06b1179740a088e61e0eb2077be5374ba5e14140ed69f24e6640397685f08" exitCode=0
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.166983 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" event={"ID":"c69efee8-33fd-4bca-9b7f-bbeedb840d83","Type":"ContainerDied","Data":"31d06b1179740a088e61e0eb2077be5374ba5e14140ed69f24e6640397685f08"}
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.167007 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c78787df7-bcg5s" event={"ID":"c69efee8-33fd-4bca-9b7f-bbeedb840d83","Type":"ContainerDied","Data":"cc7d42cdcc3261e9bf0957d377dd814ea67ac8d452829bcb0f616989d6713491"}
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.167022 5065 scope.go:117] "RemoveContainer" containerID="31d06b1179740a088e61e0eb2077be5374ba5e14140ed69f24e6640397685f08"
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.167117 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c78787df7-bcg5s"
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.170577 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-tnxx4"]
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.174930 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" podStartSLOduration=1.9963797890000001 podStartE2EDuration="4.174893633s" podCreationTimestamp="2025-10-08 13:37:17 +0000 UTC" firstStartedPulling="2025-10-08 13:37:18.300967222 +0000 UTC m=+1140.078348979" lastFinishedPulling="2025-10-08 13:37:20.479481066 +0000 UTC m=+1142.256862823" observedRunningTime="2025-10-08 13:37:21.156442775 +0000 UTC m=+1142.933824542" watchObservedRunningTime="2025-10-08 13:37:21.174893633 +0000 UTC m=+1142.952275380"
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.177267 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c5332ca-a578-4d27-bc93-e851cb32f22c","Type":"ContainerStarted","Data":"be81562b6832689e4abc7a75fd726a2ae89ccc60bbfe38a914eda8853050b686"}
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.182775 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a25d389f-4c59-4f3f-b110-291380171975","Type":"ContainerStarted","Data":"30ef47fe2a73dbe0b6069d7be23b595d7da2da852e42010cdd476e54bf37cdd0"}
Oct 08 13:37:21 crc kubenswrapper[5065]: W1008 13:37:21.183741 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69902974_784e_48c1_af4a_49758dcbaa6f.slice/crio-98a8d351fe04cf9e7457a227dfff51391f1858324d777302ce4065de313f01ed WatchSource:0}: Error finding container 98a8d351fe04cf9e7457a227dfff51391f1858324d777302ce4065de313f01ed: Status 404 returned error can't find the container with id 98a8d351fe04cf9e7457a227dfff51391f1858324d777302ce4065de313f01ed
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.203803 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5d9966bcdf-t8xzk" podStartSLOduration=1.929561648 podStartE2EDuration="4.203786339s" podCreationTimestamp="2025-10-08 13:37:17 +0000 UTC" firstStartedPulling="2025-10-08 13:37:18.199488986 +0000 UTC m=+1139.976870743" lastFinishedPulling="2025-10-08 13:37:20.473713677 +0000 UTC m=+1142.251095434" observedRunningTime="2025-10-08 13:37:21.186266207 +0000 UTC m=+1142.963647964" watchObservedRunningTime="2025-10-08 13:37:21.203786339 +0000 UTC m=+1142.981168096"
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.222614 5065 scope.go:117] "RemoveContainer" containerID="669ae70e6d534ef5f7bfd827a449839639472aff3103196cf2bfe9ca5c9a434b"
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.231733 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-bcg5s"]
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.248896 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-bcg5s"]
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.288346 5065 scope.go:117] "RemoveContainer" containerID="31d06b1179740a088e61e0eb2077be5374ba5e14140ed69f24e6640397685f08"
Oct 08 13:37:21 crc kubenswrapper[5065]: E1008 13:37:21.294685 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d06b1179740a088e61e0eb2077be5374ba5e14140ed69f24e6640397685f08\": container with ID starting with 31d06b1179740a088e61e0eb2077be5374ba5e14140ed69f24e6640397685f08 not found: ID does not exist" containerID="31d06b1179740a088e61e0eb2077be5374ba5e14140ed69f24e6640397685f08"
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.294749 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d06b1179740a088e61e0eb2077be5374ba5e14140ed69f24e6640397685f08"} err="failed to get container status \"31d06b1179740a088e61e0eb2077be5374ba5e14140ed69f24e6640397685f08\": rpc error: code = NotFound desc = could not find container \"31d06b1179740a088e61e0eb2077be5374ba5e14140ed69f24e6640397685f08\": container with ID starting with 31d06b1179740a088e61e0eb2077be5374ba5e14140ed69f24e6640397685f08 not found: ID does not exist"
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.294784 5065 scope.go:117] "RemoveContainer" containerID="669ae70e6d534ef5f7bfd827a449839639472aff3103196cf2bfe9ca5c9a434b"
Oct 08 13:37:21 crc kubenswrapper[5065]: E1008 13:37:21.296137 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"669ae70e6d534ef5f7bfd827a449839639472aff3103196cf2bfe9ca5c9a434b\": container with ID starting with 669ae70e6d534ef5f7bfd827a449839639472aff3103196cf2bfe9ca5c9a434b not found: ID does not exist" containerID="669ae70e6d534ef5f7bfd827a449839639472aff3103196cf2bfe9ca5c9a434b"
Oct 08 13:37:21 crc kubenswrapper[5065]: I1008 13:37:21.296194 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"669ae70e6d534ef5f7bfd827a449839639472aff3103196cf2bfe9ca5c9a434b"} err="failed to get container status \"669ae70e6d534ef5f7bfd827a449839639472aff3103196cf2bfe9ca5c9a434b\": rpc error: code = NotFound desc = could not find container \"669ae70e6d534ef5f7bfd827a449839639472aff3103196cf2bfe9ca5c9a434b\": container with ID starting with 669ae70e6d534ef5f7bfd827a449839639472aff3103196cf2bfe9ca5c9a434b not found: ID does not exist"
Oct 08 13:37:22 crc kubenswrapper[5065]: I1008 13:37:22.193737 5065 generic.go:334] "Generic (PLEG): container finished" podID="69902974-784e-48c1-af4a-49758dcbaa6f" containerID="d640b361a66b5a3136205e21f891f79893faebc72034c975a9e886ca8a63419b" exitCode=0
Oct 08 13:37:22 crc kubenswrapper[5065]: I1008 13:37:22.193963 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" event={"ID":"69902974-784e-48c1-af4a-49758dcbaa6f","Type":"ContainerDied","Data":"d640b361a66b5a3136205e21f891f79893faebc72034c975a9e886ca8a63419b"}
Oct 08 13:37:22 crc kubenswrapper[5065]: I1008 13:37:22.194344 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" event={"ID":"69902974-784e-48c1-af4a-49758dcbaa6f","Type":"ContainerStarted","Data":"98a8d351fe04cf9e7457a227dfff51391f1858324d777302ce4065de313f01ed"}
Oct 08 13:37:22 crc kubenswrapper[5065]: I1008 13:37:22.202597 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c5332ca-a578-4d27-bc93-e851cb32f22c","Type":"ContainerStarted","Data":"82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933"}
pod="openstack/cinder-api-0" event={"ID":"0c5332ca-a578-4d27-bc93-e851cb32f22c","Type":"ContainerStarted","Data":"82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933"} Oct 08 13:37:22 crc kubenswrapper[5065]: I1008 13:37:22.216372 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a25d389f-4c59-4f3f-b110-291380171975","Type":"ContainerStarted","Data":"a3c731a6cafa20db04649c07e1b1f808507b0a7eb52d39a8a6a3ade2c0fe09de"} Oct 08 13:37:22 crc kubenswrapper[5065]: I1008 13:37:22.884298 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c69efee8-33fd-4bca-9b7f-bbeedb840d83" path="/var/lib/kubelet/pods/c69efee8-33fd-4bca-9b7f-bbeedb840d83/volumes" Oct 08 13:37:23 crc kubenswrapper[5065]: I1008 13:37:23.234706 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" event={"ID":"69902974-784e-48c1-af4a-49758dcbaa6f","Type":"ContainerStarted","Data":"e2c222d790912f131ac7d7f129438b01d7050269a304588690a53d5894198e04"} Oct 08 13:37:23 crc kubenswrapper[5065]: I1008 13:37:23.235111 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:23 crc kubenswrapper[5065]: I1008 13:37:23.238735 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"efa17eff-40f4-46d8-b24b-b9e46d50c159","Type":"ContainerStarted","Data":"1080bd1ab0170e3fd0f72c1dd5d70e0fe75e86124567af46e197714f436555b4"} Oct 08 13:37:23 crc kubenswrapper[5065]: I1008 13:37:23.240700 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c5332ca-a578-4d27-bc93-e851cb32f22c","Type":"ContainerStarted","Data":"a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f"} Oct 08 13:37:23 crc kubenswrapper[5065]: I1008 13:37:23.241342 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 08 13:37:23 crc kubenswrapper[5065]: I1008 13:37:23.260661 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" podStartSLOduration=4.260643973 podStartE2EDuration="4.260643973s" podCreationTimestamp="2025-10-08 13:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:23.250714309 +0000 UTC m=+1145.028096096" watchObservedRunningTime="2025-10-08 13:37:23.260643973 +0000 UTC m=+1145.038025730" Oct 08 13:37:23 crc kubenswrapper[5065]: I1008 13:37:23.272440 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.272402617 podStartE2EDuration="4.272402617s" podCreationTimestamp="2025-10-08 13:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:23.266553885 +0000 UTC m=+1145.043935672" watchObservedRunningTime="2025-10-08 13:37:23.272402617 +0000 UTC m=+1145.049784374" Oct 08 13:37:23 crc kubenswrapper[5065]: I1008 13:37:23.694862 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 08 13:37:24 crc kubenswrapper[5065]: E1008 13:37:24.039989 5065 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a257f6_4b74_429b_9da0_b76051265822.slice/crio-a89c20da0276a3947aed1ced43898875afc887bdfcfa04aee8f56a3d061dd158\": RecentStats: unable to find data in memory cache]" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.212130 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5dd9f968c6-s658p"] Oct 08 13:37:24 crc kubenswrapper[5065]: E1008 13:37:24.212574 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c69efee8-33fd-4bca-9b7f-bbeedb840d83" containerName="init" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.212596 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c69efee8-33fd-4bca-9b7f-bbeedb840d83" containerName="init" Oct 08 13:37:24 crc kubenswrapper[5065]: E1008 13:37:24.212624 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c69efee8-33fd-4bca-9b7f-bbeedb840d83" containerName="dnsmasq-dns" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.212632 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c69efee8-33fd-4bca-9b7f-bbeedb840d83" containerName="dnsmasq-dns" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.212876 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="c69efee8-33fd-4bca-9b7f-bbeedb840d83" containerName="dnsmasq-dns" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.214149 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.224028 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.224196 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.227617 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5dd9f968c6-s658p"] Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.263089 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-combined-ca-bundle\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.263148 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-config-data\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.263188 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-config-data-custom\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.263300 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-public-tls-certs\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.263319 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljh6k\" (UniqueName: \"kubernetes.io/projected/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-kube-api-access-ljh6k\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.263348 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-logs\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.263380 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-internal-tls-certs\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.282582 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a25d389f-4c59-4f3f-b110-291380171975","Type":"ContainerStarted","Data":"2255625d5a288ce5c8fd77abfcc524659b9bfee90576c0aa8da45f00930a1ae8"} Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.282752 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.286301 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"efa17eff-40f4-46d8-b24b-b9e46d50c159","Type":"ContainerStarted","Data":"4655f23fd5dd0f7f66b4e17690f67bea7a59d70b9de45eca0f0f1728cb678ca3"} Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.322444 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.17217447 podStartE2EDuration="6.322423182s" podCreationTimestamp="2025-10-08 13:37:18 +0000 UTC" firstStartedPulling="2025-10-08 13:37:19.007104195 +0000 UTC m=+1140.784485952" lastFinishedPulling="2025-10-08 13:37:23.157352907 +0000 UTC m=+1144.934734664" observedRunningTime="2025-10-08 13:37:24.310340379 +0000 UTC m=+1146.087722136" watchObservedRunningTime="2025-10-08 13:37:24.322423182 +0000 UTC m=+1146.099804939" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.343430 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.294206456 podStartE2EDuration="5.34339393s" podCreationTimestamp="2025-10-08 13:37:19 +0000 UTC" firstStartedPulling="2025-10-08 13:37:20.96570567 +0000 UTC m=+1142.743087447" lastFinishedPulling="2025-10-08 13:37:22.014893164 +0000 UTC m=+1143.792274921" observedRunningTime="2025-10-08 13:37:24.33504876 +0000 UTC m=+1146.112430517" watchObservedRunningTime="2025-10-08 13:37:24.34339393 +0000 UTC m=+1146.120775687" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.365333 5065 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-public-tls-certs\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.365392 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljh6k\" (UniqueName: \"kubernetes.io/projected/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-kube-api-access-ljh6k\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.365512 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-logs\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.365561 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-internal-tls-certs\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.365584 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-combined-ca-bundle\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.365636 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-config-data\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.365707 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-config-data-custom\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.367381 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-logs\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.372768 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-config-data\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.373933 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-combined-ca-bundle\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.379226 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-config-data-custom\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.379614 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-internal-tls-certs\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.383579 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-public-tls-certs\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.384465 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljh6k\" (UniqueName: \"kubernetes.io/projected/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-kube-api-access-ljh6k\") pod \"barbican-api-5dd9f968c6-s658p\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.546952 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:37:24 crc kubenswrapper[5065]: I1008 13:37:24.782181 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.069777 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5dd9f968c6-s658p"] Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.306318 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dd9f968c6-s658p" event={"ID":"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7","Type":"ContainerStarted","Data":"282626a68ae112e2a7cb36758619cc926c9f8f75cf3cc744534bbd577bef1de1"} Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.306667 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dd9f968c6-s658p" event={"ID":"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7","Type":"ContainerStarted","Data":"6a440f6db7d4aefe96e20e5600720abbe971ccf10a98929d84641ba84ecc709d"} Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.307705 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0c5332ca-a578-4d27-bc93-e851cb32f22c" containerName="cinder-api-log" containerID="cri-o://82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933" gracePeriod=30 Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.307773 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0c5332ca-a578-4d27-bc93-e851cb32f22c" containerName="cinder-api" containerID="cri-o://a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f" gracePeriod=30 Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.873354 5065 util.go:48] "No ready sandbox for pod can be found. 
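
Annotation: the "Killing container with a grace period" lines show graceful termination in action: dnsmasq-dns earlier received gracePeriod=10, while both cinder-api containers get gracePeriod=30 here. The runtime delivers SIGTERM first and escalates to SIGKILL only if the process outlives the grace period. Below is a simplified local illustration of that escalation in Go; it is a sketch of the general pattern, not the CRI-O/kubelet code path.

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace mimics the runtime's stop sequence: SIGTERM first, then
// SIGKILL only if the process is still alive when the grace period lapses.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		return cmd.Process.Kill() // grace expired: SIGKILL
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithGrace(cmd, 10*time.Second))
}
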
Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.996305 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-config-data-custom\") pod \"0c5332ca-a578-4d27-bc93-e851cb32f22c\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") "
Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.996384 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c5332ca-a578-4d27-bc93-e851cb32f22c-etc-machine-id\") pod \"0c5332ca-a578-4d27-bc93-e851cb32f22c\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") "
Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.996403 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c5332ca-a578-4d27-bc93-e851cb32f22c-logs\") pod \"0c5332ca-a578-4d27-bc93-e851cb32f22c\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") "
Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.996447 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-combined-ca-bundle\") pod \"0c5332ca-a578-4d27-bc93-e851cb32f22c\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") "
Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.996483 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4gt9\" (UniqueName: \"kubernetes.io/projected/0c5332ca-a578-4d27-bc93-e851cb32f22c-kube-api-access-b4gt9\") pod \"0c5332ca-a578-4d27-bc93-e851cb32f22c\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") "
Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.996499 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c5332ca-a578-4d27-bc93-e851cb32f22c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0c5332ca-a578-4d27-bc93-e851cb32f22c" (UID: "0c5332ca-a578-4d27-bc93-e851cb32f22c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.996544 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-scripts\") pod \"0c5332ca-a578-4d27-bc93-e851cb32f22c\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") "
Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.996617 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-config-data\") pod \"0c5332ca-a578-4d27-bc93-e851cb32f22c\" (UID: \"0c5332ca-a578-4d27-bc93-e851cb32f22c\") "
Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.996905 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c5332ca-a578-4d27-bc93-e851cb32f22c-logs" (OuterVolumeSpecName: "logs") pod "0c5332ca-a578-4d27-bc93-e851cb32f22c" (UID: "0c5332ca-a578-4d27-bc93-e851cb32f22c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:37:25 crc kubenswrapper[5065]: I1008 13:37:25.996911 5065 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c5332ca-a578-4d27-bc93-e851cb32f22c-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.001339 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-scripts" (OuterVolumeSpecName: "scripts") pod "0c5332ca-a578-4d27-bc93-e851cb32f22c" (UID: "0c5332ca-a578-4d27-bc93-e851cb32f22c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.001521 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0c5332ca-a578-4d27-bc93-e851cb32f22c" (UID: "0c5332ca-a578-4d27-bc93-e851cb32f22c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.001561 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5332ca-a578-4d27-bc93-e851cb32f22c-kube-api-access-b4gt9" (OuterVolumeSpecName: "kube-api-access-b4gt9") pod "0c5332ca-a578-4d27-bc93-e851cb32f22c" (UID: "0c5332ca-a578-4d27-bc93-e851cb32f22c"). InnerVolumeSpecName "kube-api-access-b4gt9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.028553 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c5332ca-a578-4d27-bc93-e851cb32f22c" (UID: "0c5332ca-a578-4d27-bc93-e851cb32f22c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.065362 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-config-data" (OuterVolumeSpecName: "config-data") pod "0c5332ca-a578-4d27-bc93-e851cb32f22c" (UID: "0c5332ca-a578-4d27-bc93-e851cb32f22c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.097873 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c5332ca-a578-4d27-bc93-e851cb32f22c-logs\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.097907 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.097917 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4gt9\" (UniqueName: \"kubernetes.io/projected/0c5332ca-a578-4d27-bc93-e851cb32f22c-kube-api-access-b4gt9\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.097927 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.097936 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.097945 5065 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c5332ca-a578-4d27-bc93-e851cb32f22c-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.314754 5065 generic.go:334] "Generic (PLEG): container finished" podID="0c5332ca-a578-4d27-bc93-e851cb32f22c" containerID="a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f" exitCode=0
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.314783 5065 generic.go:334] "Generic (PLEG): container finished" podID="0c5332ca-a578-4d27-bc93-e851cb32f22c" containerID="82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933" exitCode=143
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.314795 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c5332ca-a578-4d27-bc93-e851cb32f22c","Type":"ContainerDied","Data":"a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f"}
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.314821 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.314842 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c5332ca-a578-4d27-bc93-e851cb32f22c","Type":"ContainerDied","Data":"82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933"}
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.314855 5065 scope.go:117] "RemoveContainer" containerID="a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.314857 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0c5332ca-a578-4d27-bc93-e851cb32f22c","Type":"ContainerDied","Data":"be81562b6832689e4abc7a75fd726a2ae89ccc60bbfe38a914eda8853050b686"}
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.317574 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dd9f968c6-s658p" event={"ID":"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7","Type":"ContainerStarted","Data":"0e1463d6dafc9375c3fe50606be6afdbff0ecc4ff69847c5fb6972ed597e6323"}
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.317638 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5dd9f968c6-s658p"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.317687 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5dd9f968c6-s658p"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.344086 5065 scope.go:117] "RemoveContainer" containerID="82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.346920 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5dd9f968c6-s658p" podStartSLOduration=2.3469036340000002 podStartE2EDuration="2.346903634s" podCreationTimestamp="2025-10-08 13:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:26.344572599 +0000 UTC m=+1148.121954366" watchObservedRunningTime="2025-10-08 13:37:26.346903634 +0000 UTC m=+1148.124285391"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.364905 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.367150 5065 scope.go:117] "RemoveContainer" containerID="a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f"
Oct 08 13:37:26 crc kubenswrapper[5065]: E1008 13:37:26.367708 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f\": container with ID starting with a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f not found: ID does not exist" containerID="a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.367751 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f"} err="failed to get container status \"a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f\": rpc error: code = NotFound desc = could not find container \"a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f\": container with ID starting with a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f not found: ID does not exist"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.367777 5065 scope.go:117] "RemoveContainer" containerID="82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933"
Oct 08 13:37:26 crc kubenswrapper[5065]: E1008 13:37:26.369118 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933\": container with ID starting with 82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933 not found: ID does not exist" containerID="82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.369169 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933"} err="failed to get container status \"82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933\": rpc error: code = NotFound desc = could not find container \"82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933\": container with ID starting with 82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933 not found: ID does not exist"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.369201 5065 scope.go:117] "RemoveContainer" containerID="a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.372564 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f"} err="failed to get container status \"a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f\": rpc error: code = NotFound desc = could not find container \"a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f\": container with ID starting with a16bfe9cc3d92f5ef07ff357dd5de4dc10e637d0f9ad16c856c35cfd41d6d71f not found: ID does not exist"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.372611 5065 scope.go:117] "RemoveContainer" containerID="82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.374054 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933"} err="failed to get container status \"82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933\": rpc error: code = NotFound desc = could not find container \"82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933\": container with ID starting with 82d0c50aff35da2610d7b3f4e5423a41ca51ade3001adc7be04a0bd2e54de933 not found: ID does not exist"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.377196 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.388235 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Oct 08 13:37:26 crc kubenswrapper[5065]: E1008 13:37:26.389049 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5332ca-a578-4d27-bc93-e851cb32f22c" containerName="cinder-api-log"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.389072 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5332ca-a578-4d27-bc93-e851cb32f22c" containerName="cinder-api-log"
Oct 08 13:37:26 crc kubenswrapper[5065]: E1008 13:37:26.389095 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5332ca-a578-4d27-bc93-e851cb32f22c" containerName="cinder-api"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.389102 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5332ca-a578-4d27-bc93-e851cb32f22c" containerName="cinder-api"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.389290 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5332ca-a578-4d27-bc93-e851cb32f22c" containerName="cinder-api"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.389322 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5332ca-a578-4d27-bc93-e851cb32f22c" containerName="cinder-api-log"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.390507 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.406762 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.407653 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.407828 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.427180 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.517801 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.517873 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.517911 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp5cw\" (UniqueName: \"kubernetes.io/projected/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-kube-api-access-cp5cw\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.517931 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-config-data\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.517964 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-logs\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0"
Oct 08 13:37:26 crc kubenswrapper[5065]: I1008
13:37:26.517984 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-scripts\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.518011 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.518209 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.518316 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.619383 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.619478 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.619512 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.619537 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp5cw\" (UniqueName: \"kubernetes.io/projected/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-kube-api-access-cp5cw\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.619558 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-config-data\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.619590 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-logs\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " 
pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.619605 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-scripts\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.619600 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.619627 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.621021 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-logs\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.621123 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.624304 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.624710 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.624920 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-scripts\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.625281 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-config-data\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.627721 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 
13:37:26.627943 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.638447 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp5cw\" (UniqueName: \"kubernetes.io/projected/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-kube-api-access-cp5cw\") pod \"cinder-api-0\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.763742 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 13:37:26 crc kubenswrapper[5065]: I1008 13:37:26.885355 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c5332ca-a578-4d27-bc93-e851cb32f22c" path="/var/lib/kubelet/pods/0c5332ca-a578-4d27-bc93-e851cb32f22c/volumes" Oct 08 13:37:27 crc kubenswrapper[5065]: I1008 13:37:27.200962 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 13:37:27 crc kubenswrapper[5065]: W1008 13:37:27.209363 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0d5e818_6480_4dfb_b8a2_50dc4ec58dad.slice/crio-1ddaf5ed468b9f537cc64dc2d3d0360005419a11d2b346446606cf70d57d78b3 WatchSource:0}: Error finding container 1ddaf5ed468b9f537cc64dc2d3d0360005419a11d2b346446606cf70d57d78b3: Status 404 returned error can't find the container with id 1ddaf5ed468b9f537cc64dc2d3d0360005419a11d2b346446606cf70d57d78b3 Oct 08 13:37:27 crc kubenswrapper[5065]: I1008 13:37:27.333896 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad","Type":"ContainerStarted","Data":"1ddaf5ed468b9f537cc64dc2d3d0360005419a11d2b346446606cf70d57d78b3"} Oct 08 13:37:28 crc kubenswrapper[5065]: I1008 13:37:28.347796 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad","Type":"ContainerStarted","Data":"6e6bc70c0d9042c36b539f3256634e4e3089df774e755ee10132f50e60ab06f9"} Oct 08 13:37:29 crc kubenswrapper[5065]: I1008 13:37:29.191651 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:37:29 crc kubenswrapper[5065]: I1008 13:37:29.193015 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:37:29 crc kubenswrapper[5065]: I1008 13:37:29.367498 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad","Type":"ContainerStarted","Data":"1bf0529c90d0ac2dd5a5b0e88e4e632725a93d83c5bbb3394596dcada46534fe"} Oct 08 13:37:29 crc kubenswrapper[5065]: I1008 13:37:29.367649 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 08 13:37:29 crc kubenswrapper[5065]: I1008 13:37:29.416469 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.416451674 podStartE2EDuration="3.416451674s" podCreationTimestamp="2025-10-08 13:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:29.394375806 +0000 UTC m=+1151.171757563" watchObservedRunningTime="2025-10-08 13:37:29.416451674 +0000 UTC m=+1151.193833441" Oct 08 13:37:29 crc kubenswrapper[5065]: I1008 13:37:29.432831 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:29 crc kubenswrapper[5065]: I1008 13:37:29.442056 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:29 crc kubenswrapper[5065]: I1008 13:37:29.869163 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:37:29 crc kubenswrapper[5065]: I1008 13:37:29.956487 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b55c5465-lbrmn"] Oct 08 13:37:29 crc kubenswrapper[5065]: I1008 13:37:29.956833 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" podUID="e4512702-9289-4284-9325-674842084fc7" containerName="dnsmasq-dns" containerID="cri-o://d619bc60e14851813126cd50d3993edbece8e90cc2c7543c50d2dc75541139b2" gracePeriod=10 Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.130967 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.235164 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.375747 5065 generic.go:334] "Generic (PLEG): container finished" podID="e4512702-9289-4284-9325-674842084fc7" containerID="d619bc60e14851813126cd50d3993edbece8e90cc2c7543c50d2dc75541139b2" exitCode=0 Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.376756 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" event={"ID":"e4512702-9289-4284-9325-674842084fc7","Type":"ContainerDied","Data":"d619bc60e14851813126cd50d3993edbece8e90cc2c7543c50d2dc75541139b2"} Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.376897 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="efa17eff-40f4-46d8-b24b-b9e46d50c159" containerName="cinder-scheduler" containerID="cri-o://1080bd1ab0170e3fd0f72c1dd5d70e0fe75e86124567af46e197714f436555b4" gracePeriod=30 Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.377750 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="efa17eff-40f4-46d8-b24b-b9e46d50c159" containerName="probe" containerID="cri-o://4655f23fd5dd0f7f66b4e17690f67bea7a59d70b9de45eca0f0f1728cb678ca3" gracePeriod=30 Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.535156 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.627114 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-ovsdbserver-nb\") pod \"e4512702-9289-4284-9325-674842084fc7\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.627172 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-dns-swift-storage-0\") pod \"e4512702-9289-4284-9325-674842084fc7\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.627202 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skrhk\" (UniqueName: \"kubernetes.io/projected/e4512702-9289-4284-9325-674842084fc7-kube-api-access-skrhk\") pod \"e4512702-9289-4284-9325-674842084fc7\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.627252 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-dns-svc\") pod \"e4512702-9289-4284-9325-674842084fc7\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.627321 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-ovsdbserver-sb\") pod \"e4512702-9289-4284-9325-674842084fc7\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.627493 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-config\") pod \"e4512702-9289-4284-9325-674842084fc7\" (UID: \"e4512702-9289-4284-9325-674842084fc7\") " Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.636571 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4512702-9289-4284-9325-674842084fc7-kube-api-access-skrhk" (OuterVolumeSpecName: "kube-api-access-skrhk") pod "e4512702-9289-4284-9325-674842084fc7" (UID: "e4512702-9289-4284-9325-674842084fc7"). InnerVolumeSpecName "kube-api-access-skrhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.713028 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e4512702-9289-4284-9325-674842084fc7" (UID: "e4512702-9289-4284-9325-674842084fc7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.733104 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e4512702-9289-4284-9325-674842084fc7" (UID: "e4512702-9289-4284-9325-674842084fc7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.733133 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-config" (OuterVolumeSpecName: "config") pod "e4512702-9289-4284-9325-674842084fc7" (UID: "e4512702-9289-4284-9325-674842084fc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.733286 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e4512702-9289-4284-9325-674842084fc7" (UID: "e4512702-9289-4284-9325-674842084fc7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.733475 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e4512702-9289-4284-9325-674842084fc7" (UID: "e4512702-9289-4284-9325-674842084fc7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.734700 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.734730 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.734744 5065 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.734757 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skrhk\" (UniqueName: \"kubernetes.io/projected/e4512702-9289-4284-9325-674842084fc7-kube-api-access-skrhk\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.734785 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:30 crc kubenswrapper[5065]: I1008 13:37:30.734796 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4512702-9289-4284-9325-674842084fc7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:31 crc kubenswrapper[5065]: I1008 13:37:31.134150 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:37:31 crc kubenswrapper[5065]: I1008 13:37:31.394639 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" event={"ID":"e4512702-9289-4284-9325-674842084fc7","Type":"ContainerDied","Data":"c7a782b5263ace5f9ffc326e539d9fa82d79b6495e300b5bfe519dafb0d3962a"} Oct 08 13:37:31 crc kubenswrapper[5065]: I1008 13:37:31.394669 5065 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b55c5465-lbrmn" Oct 08 13:37:31 crc kubenswrapper[5065]: I1008 13:37:31.394692 5065 scope.go:117] "RemoveContainer" containerID="d619bc60e14851813126cd50d3993edbece8e90cc2c7543c50d2dc75541139b2" Oct 08 13:37:31 crc kubenswrapper[5065]: I1008 13:37:31.398978 5065 generic.go:334] "Generic (PLEG): container finished" podID="efa17eff-40f4-46d8-b24b-b9e46d50c159" containerID="4655f23fd5dd0f7f66b4e17690f67bea7a59d70b9de45eca0f0f1728cb678ca3" exitCode=0 Oct 08 13:37:31 crc kubenswrapper[5065]: I1008 13:37:31.399008 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"efa17eff-40f4-46d8-b24b-b9e46d50c159","Type":"ContainerDied","Data":"4655f23fd5dd0f7f66b4e17690f67bea7a59d70b9de45eca0f0f1728cb678ca3"} Oct 08 13:37:31 crc kubenswrapper[5065]: I1008 13:37:31.423493 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b55c5465-lbrmn"] Oct 08 13:37:31 crc kubenswrapper[5065]: I1008 13:37:31.424617 5065 scope.go:117] "RemoveContainer" containerID="826c106c7a29a3df7e958978c73c0f16d133fc4770415801ae84ff8eb1004905" Oct 08 13:37:31 crc kubenswrapper[5065]: I1008 13:37:31.432490 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b55c5465-lbrmn"] Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.816181 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 08 13:37:32 crc kubenswrapper[5065]: E1008 13:37:32.816618 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4512702-9289-4284-9325-674842084fc7" containerName="init" Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.816634 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4512702-9289-4284-9325-674842084fc7" containerName="init" Oct 08 13:37:32 crc kubenswrapper[5065]: E1008 13:37:32.816664 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4512702-9289-4284-9325-674842084fc7" containerName="dnsmasq-dns" Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.816672 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4512702-9289-4284-9325-674842084fc7" containerName="dnsmasq-dns" Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.816867 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4512702-9289-4284-9325-674842084fc7" containerName="dnsmasq-dns" Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.817609 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.819406 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.819929 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.820692 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-qcm66" Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.828687 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.901816 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4512702-9289-4284-9325-674842084fc7" path="/var/lib/kubelet/pods/e4512702-9289-4284-9325-674842084fc7/volumes" Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.912897 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/217108e8-6327-4b34-bfe4-079db2c41cf2-openstack-config\") pod \"openstackclient\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " pod="openstack/openstackclient" Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.913067 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/217108e8-6327-4b34-bfe4-079db2c41cf2-openstack-config-secret\") pod \"openstackclient\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " pod="openstack/openstackclient" Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.913240 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh8wv\" (UniqueName: \"kubernetes.io/projected/217108e8-6327-4b34-bfe4-079db2c41cf2-kube-api-access-rh8wv\") pod \"openstackclient\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " pod="openstack/openstackclient" Oct 08 13:37:32 crc kubenswrapper[5065]: I1008 13:37:32.913528 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217108e8-6327-4b34-bfe4-079db2c41cf2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.015569 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/217108e8-6327-4b34-bfe4-079db2c41cf2-openstack-config\") pod \"openstackclient\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.016363 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/217108e8-6327-4b34-bfe4-079db2c41cf2-openstack-config\") pod \"openstackclient\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.017431 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/217108e8-6327-4b34-bfe4-079db2c41cf2-openstack-config-secret\") pod 
\"openstackclient\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.017574 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh8wv\" (UniqueName: \"kubernetes.io/projected/217108e8-6327-4b34-bfe4-079db2c41cf2-kube-api-access-rh8wv\") pod \"openstackclient\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.017752 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217108e8-6327-4b34-bfe4-079db2c41cf2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.022831 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/217108e8-6327-4b34-bfe4-079db2c41cf2-openstack-config-secret\") pod \"openstackclient\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.027055 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217108e8-6327-4b34-bfe4-079db2c41cf2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.033690 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh8wv\" (UniqueName: \"kubernetes.io/projected/217108e8-6327-4b34-bfe4-079db2c41cf2-kube-api-access-rh8wv\") pod \"openstackclient\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.099049 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.099618 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.110399 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.194630 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.195915 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.212108 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 08 13:37:33 crc kubenswrapper[5065]: E1008 13:37:33.251186 5065 log.go:32] "RunPodSandbox from runtime service failed" err=< Oct 08 13:37:33 crc kubenswrapper[5065]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_217108e8-6327-4b34-bfe4-079db2c41cf2_0(cae0b810643e8533fc67825a74a1bfb338506b0a7e5a79eee5282cf0695122bf): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cae0b810643e8533fc67825a74a1bfb338506b0a7e5a79eee5282cf0695122bf" Netns:"/var/run/netns/16e6d2f9-d06f-4741-b481-85c035500e08" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=cae0b810643e8533fc67825a74a1bfb338506b0a7e5a79eee5282cf0695122bf;K8S_POD_UID=217108e8-6327-4b34-bfe4-079db2c41cf2" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/217108e8-6327-4b34-bfe4-079db2c41cf2]: expected pod UID "217108e8-6327-4b34-bfe4-079db2c41cf2" but got "2136579b-6d89-48ff-b960-a0401eb9af4c" from Kube API Oct 08 13:37:33 crc kubenswrapper[5065]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Oct 08 13:37:33 crc kubenswrapper[5065]: > Oct 08 13:37:33 crc kubenswrapper[5065]: E1008 13:37:33.251281 5065 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Oct 08 13:37:33 crc kubenswrapper[5065]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_217108e8-6327-4b34-bfe4-079db2c41cf2_0(cae0b810643e8533fc67825a74a1bfb338506b0a7e5a79eee5282cf0695122bf): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cae0b810643e8533fc67825a74a1bfb338506b0a7e5a79eee5282cf0695122bf" Netns:"/var/run/netns/16e6d2f9-d06f-4741-b481-85c035500e08" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=cae0b810643e8533fc67825a74a1bfb338506b0a7e5a79eee5282cf0695122bf;K8S_POD_UID=217108e8-6327-4b34-bfe4-079db2c41cf2" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/217108e8-6327-4b34-bfe4-079db2c41cf2]: expected pod UID "217108e8-6327-4b34-bfe4-079db2c41cf2" but got "2136579b-6d89-48ff-b960-a0401eb9af4c" from Kube API Oct 08 13:37:33 crc kubenswrapper[5065]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Oct 08 13:37:33 
crc kubenswrapper[5065]: > pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.323222 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2136579b-6d89-48ff-b960-a0401eb9af4c-openstack-config-secret\") pod \"openstackclient\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.323270 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2136579b-6d89-48ff-b960-a0401eb9af4c-openstack-config\") pod \"openstackclient\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.323301 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2136579b-6d89-48ff-b960-a0401eb9af4c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.323553 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jjt8\" (UniqueName: \"kubernetes.io/projected/2136579b-6d89-48ff-b960-a0401eb9af4c-kube-api-access-2jjt8\") pod \"openstackclient\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.425648 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2136579b-6d89-48ff-b960-a0401eb9af4c-openstack-config-secret\") pod \"openstackclient\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.425704 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2136579b-6d89-48ff-b960-a0401eb9af4c-openstack-config\") pod \"openstackclient\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.426907 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2136579b-6d89-48ff-b960-a0401eb9af4c-openstack-config\") pod \"openstackclient\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.426981 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2136579b-6d89-48ff-b960-a0401eb9af4c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.431275 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jjt8\" (UniqueName: \"kubernetes.io/projected/2136579b-6d89-48ff-b960-a0401eb9af4c-kube-api-access-2jjt8\") pod \"openstackclient\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.431619 5065 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2136579b-6d89-48ff-b960-a0401eb9af4c-openstack-config-secret\") pod \"openstackclient\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.434942 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2136579b-6d89-48ff-b960-a0401eb9af4c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.442552 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.445848 5065 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="217108e8-6327-4b34-bfe4-079db2c41cf2" podUID="2136579b-6d89-48ff-b960-a0401eb9af4c" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.446873 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jjt8\" (UniqueName: \"kubernetes.io/projected/2136579b-6d89-48ff-b960-a0401eb9af4c-kube-api-access-2jjt8\") pod \"openstackclient\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") " pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.452586 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.519488 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.532920 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/217108e8-6327-4b34-bfe4-079db2c41cf2-openstack-config-secret\") pod \"217108e8-6327-4b34-bfe4-079db2c41cf2\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.533101 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh8wv\" (UniqueName: \"kubernetes.io/projected/217108e8-6327-4b34-bfe4-079db2c41cf2-kube-api-access-rh8wv\") pod \"217108e8-6327-4b34-bfe4-079db2c41cf2\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.533135 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/217108e8-6327-4b34-bfe4-079db2c41cf2-openstack-config\") pod \"217108e8-6327-4b34-bfe4-079db2c41cf2\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.533179 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217108e8-6327-4b34-bfe4-079db2c41cf2-combined-ca-bundle\") pod \"217108e8-6327-4b34-bfe4-079db2c41cf2\" (UID: \"217108e8-6327-4b34-bfe4-079db2c41cf2\") " Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.533945 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217108e8-6327-4b34-bfe4-079db2c41cf2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod 
"217108e8-6327-4b34-bfe4-079db2c41cf2" (UID: "217108e8-6327-4b34-bfe4-079db2c41cf2"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.537343 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217108e8-6327-4b34-bfe4-079db2c41cf2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "217108e8-6327-4b34-bfe4-079db2c41cf2" (UID: "217108e8-6327-4b34-bfe4-079db2c41cf2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.537461 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217108e8-6327-4b34-bfe4-079db2c41cf2-kube-api-access-rh8wv" (OuterVolumeSpecName: "kube-api-access-rh8wv") pod "217108e8-6327-4b34-bfe4-079db2c41cf2" (UID: "217108e8-6327-4b34-bfe4-079db2c41cf2"). InnerVolumeSpecName "kube-api-access-rh8wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.542192 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217108e8-6327-4b34-bfe4-079db2c41cf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "217108e8-6327-4b34-bfe4-079db2c41cf2" (UID: "217108e8-6327-4b34-bfe4-079db2c41cf2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.635332 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh8wv\" (UniqueName: \"kubernetes.io/projected/217108e8-6327-4b34-bfe4-079db2c41cf2-kube-api-access-rh8wv\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.635674 5065 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/217108e8-6327-4b34-bfe4-079db2c41cf2-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.635688 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217108e8-6327-4b34-bfe4-079db2c41cf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.635701 5065 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/217108e8-6327-4b34-bfe4-079db2c41cf2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:33 crc kubenswrapper[5065]: I1008 13:37:33.990014 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 08 13:37:33 crc kubenswrapper[5065]: W1008 13:37:33.991818 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2136579b_6d89_48ff_b960_a0401eb9af4c.slice/crio-1165a7ab05e73eb564b798cf3b70302559ff4c024b8c054a547516dee29972af WatchSource:0}: Error finding container 1165a7ab05e73eb564b798cf3b70302559ff4c024b8c054a547516dee29972af: Status 404 returned error can't find the container with id 1165a7ab05e73eb564b798cf3b70302559ff4c024b8c054a547516dee29972af Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.104606 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.246741 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efa17eff-40f4-46d8-b24b-b9e46d50c159-etc-machine-id\") pod \"efa17eff-40f4-46d8-b24b-b9e46d50c159\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.246802 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-config-data-custom\") pod \"efa17eff-40f4-46d8-b24b-b9e46d50c159\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.246837 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-config-data\") pod \"efa17eff-40f4-46d8-b24b-b9e46d50c159\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.246832 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efa17eff-40f4-46d8-b24b-b9e46d50c159-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "efa17eff-40f4-46d8-b24b-b9e46d50c159" (UID: "efa17eff-40f4-46d8-b24b-b9e46d50c159"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.246953 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8kcc\" (UniqueName: \"kubernetes.io/projected/efa17eff-40f4-46d8-b24b-b9e46d50c159-kube-api-access-g8kcc\") pod \"efa17eff-40f4-46d8-b24b-b9e46d50c159\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.246997 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-combined-ca-bundle\") pod \"efa17eff-40f4-46d8-b24b-b9e46d50c159\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.247030 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-scripts\") pod \"efa17eff-40f4-46d8-b24b-b9e46d50c159\" (UID: \"efa17eff-40f4-46d8-b24b-b9e46d50c159\") " Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.247635 5065 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efa17eff-40f4-46d8-b24b-b9e46d50c159-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.252444 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-scripts" (OuterVolumeSpecName: "scripts") pod "efa17eff-40f4-46d8-b24b-b9e46d50c159" (UID: "efa17eff-40f4-46d8-b24b-b9e46d50c159"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.258700 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa17eff-40f4-46d8-b24b-b9e46d50c159-kube-api-access-g8kcc" (OuterVolumeSpecName: "kube-api-access-g8kcc") pod "efa17eff-40f4-46d8-b24b-b9e46d50c159" (UID: "efa17eff-40f4-46d8-b24b-b9e46d50c159"). InnerVolumeSpecName "kube-api-access-g8kcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:34 crc kubenswrapper[5065]: E1008 13:37:34.286466 5065 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a257f6_4b74_429b_9da0_b76051265822.slice/crio-a89c20da0276a3947aed1ced43898875afc887bdfcfa04aee8f56a3d061dd158\": RecentStats: unable to find data in memory cache]" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.290177 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "efa17eff-40f4-46d8-b24b-b9e46d50c159" (UID: "efa17eff-40f4-46d8-b24b-b9e46d50c159"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.318973 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efa17eff-40f4-46d8-b24b-b9e46d50c159" (UID: "efa17eff-40f4-46d8-b24b-b9e46d50c159"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.349873 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8kcc\" (UniqueName: \"kubernetes.io/projected/efa17eff-40f4-46d8-b24b-b9e46d50c159-kube-api-access-g8kcc\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.349931 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.349943 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.349959 5065 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.366270 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-config-data" (OuterVolumeSpecName: "config-data") pod "efa17eff-40f4-46d8-b24b-b9e46d50c159" (UID: "efa17eff-40f4-46d8-b24b-b9e46d50c159"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.452865 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa17eff-40f4-46d8-b24b-b9e46d50c159-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.452857 5065 generic.go:334] "Generic (PLEG): container finished" podID="efa17eff-40f4-46d8-b24b-b9e46d50c159" containerID="1080bd1ab0170e3fd0f72c1dd5d70e0fe75e86124567af46e197714f436555b4" exitCode=0 Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.452923 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.452884 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"efa17eff-40f4-46d8-b24b-b9e46d50c159","Type":"ContainerDied","Data":"1080bd1ab0170e3fd0f72c1dd5d70e0fe75e86124567af46e197714f436555b4"} Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.452996 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"efa17eff-40f4-46d8-b24b-b9e46d50c159","Type":"ContainerDied","Data":"40f1b63fa99da2cf415669809425ffc1b4c9d38d8dac4ad810d6aee83d4c586d"} Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.453020 5065 scope.go:117] "RemoveContainer" containerID="4655f23fd5dd0f7f66b4e17690f67bea7a59d70b9de45eca0f0f1728cb678ca3" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.454361 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2136579b-6d89-48ff-b960-a0401eb9af4c","Type":"ContainerStarted","Data":"1165a7ab05e73eb564b798cf3b70302559ff4c024b8c054a547516dee29972af"} Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.454385 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.461733 5065 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="217108e8-6327-4b34-bfe4-079db2c41cf2" podUID="2136579b-6d89-48ff-b960-a0401eb9af4c" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.473797 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.477399 5065 scope.go:117] "RemoveContainer" containerID="1080bd1ab0170e3fd0f72c1dd5d70e0fe75e86124567af46e197714f436555b4" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.492949 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.504961 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.512316 5065 scope.go:117] "RemoveContainer" containerID="4655f23fd5dd0f7f66b4e17690f67bea7a59d70b9de45eca0f0f1728cb678ca3" Oct 08 13:37:34 crc kubenswrapper[5065]: E1008 13:37:34.512801 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4655f23fd5dd0f7f66b4e17690f67bea7a59d70b9de45eca0f0f1728cb678ca3\": container with ID starting with 4655f23fd5dd0f7f66b4e17690f67bea7a59d70b9de45eca0f0f1728cb678ca3 not found: ID does not exist" containerID="4655f23fd5dd0f7f66b4e17690f67bea7a59d70b9de45eca0f0f1728cb678ca3" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.512838 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4655f23fd5dd0f7f66b4e17690f67bea7a59d70b9de45eca0f0f1728cb678ca3"} err="failed to get container status \"4655f23fd5dd0f7f66b4e17690f67bea7a59d70b9de45eca0f0f1728cb678ca3\": rpc error: code = NotFound desc = could not find container \"4655f23fd5dd0f7f66b4e17690f67bea7a59d70b9de45eca0f0f1728cb678ca3\": container with ID starting with 4655f23fd5dd0f7f66b4e17690f67bea7a59d70b9de45eca0f0f1728cb678ca3 not found: ID does not exist" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.512867 5065 scope.go:117] "RemoveContainer" containerID="1080bd1ab0170e3fd0f72c1dd5d70e0fe75e86124567af46e197714f436555b4" Oct 08 13:37:34 crc kubenswrapper[5065]: E1008 13:37:34.515300 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1080bd1ab0170e3fd0f72c1dd5d70e0fe75e86124567af46e197714f436555b4\": container with ID starting with 1080bd1ab0170e3fd0f72c1dd5d70e0fe75e86124567af46e197714f436555b4 not found: ID does not exist" containerID="1080bd1ab0170e3fd0f72c1dd5d70e0fe75e86124567af46e197714f436555b4" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.515341 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1080bd1ab0170e3fd0f72c1dd5d70e0fe75e86124567af46e197714f436555b4"} err="failed to get container status \"1080bd1ab0170e3fd0f72c1dd5d70e0fe75e86124567af46e197714f436555b4\": rpc error: code = NotFound desc = could not find container \"1080bd1ab0170e3fd0f72c1dd5d70e0fe75e86124567af46e197714f436555b4\": container with ID starting with 1080bd1ab0170e3fd0f72c1dd5d70e0fe75e86124567af46e197714f436555b4 not found: ID does not exist" Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.530778 5065 
Oct 08 13:37:34 crc kubenswrapper[5065]: E1008 13:37:34.531228 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa17eff-40f4-46d8-b24b-b9e46d50c159" containerName="probe"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.531251 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa17eff-40f4-46d8-b24b-b9e46d50c159" containerName="probe"
Oct 08 13:37:34 crc kubenswrapper[5065]: E1008 13:37:34.531298 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa17eff-40f4-46d8-b24b-b9e46d50c159" containerName="cinder-scheduler"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.531307 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa17eff-40f4-46d8-b24b-b9e46d50c159" containerName="cinder-scheduler"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.531539 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa17eff-40f4-46d8-b24b-b9e46d50c159" containerName="probe"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.531570 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa17eff-40f4-46d8-b24b-b9e46d50c159" containerName="cinder-scheduler"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.532731 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.534845 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.541022 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.655743 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.655791 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2grk9\" (UniqueName: \"kubernetes.io/projected/0d473a1f-35dc-4b20-a344-19c23f1c8c06-kube-api-access-2grk9\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.655908 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.655949 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.655984 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.656108 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d473a1f-35dc-4b20-a344-19c23f1c8c06-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.758039 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.758113 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.758240 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d473a1f-35dc-4b20-a344-19c23f1c8c06-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.758287 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.758313 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2grk9\" (UniqueName: \"kubernetes.io/projected/0d473a1f-35dc-4b20-a344-19c23f1c8c06-kube-api-access-2grk9\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.758376 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.758608 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d473a1f-35dc-4b20-a344-19c23f1c8c06-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.764515 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.765952 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.766004 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.766304 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.779400 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2grk9\" (UniqueName: \"kubernetes.io/projected/0d473a1f-35dc-4b20-a344-19c23f1c8c06-kube-api-access-2grk9\") pod \"cinder-scheduler-0\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.853752 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.889077 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217108e8-6327-4b34-bfe4-079db2c41cf2" path="/var/lib/kubelet/pods/217108e8-6327-4b34-bfe4-079db2c41cf2/volumes"
Oct 08 13:37:34 crc kubenswrapper[5065]: I1008 13:37:34.889498 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa17eff-40f4-46d8-b24b-b9e46d50c159" path="/var/lib/kubelet/pods/efa17eff-40f4-46d8-b24b-b9e46d50c159/volumes"
Oct 08 13:37:35 crc kubenswrapper[5065]: I1008 13:37:35.306028 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 08 13:37:35 crc kubenswrapper[5065]: W1008 13:37:35.317843 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d473a1f_35dc_4b20_a344_19c23f1c8c06.slice/crio-22b3afb5168cedc8711dd30cf6488d464901864b51d4792cb1a58f1c8ddeafc2 WatchSource:0}: Error finding container 22b3afb5168cedc8711dd30cf6488d464901864b51d4792cb1a58f1c8ddeafc2: Status 404 returned error can't find the container with id 22b3afb5168cedc8711dd30cf6488d464901864b51d4792cb1a58f1c8ddeafc2
Oct 08 13:37:35 crc kubenswrapper[5065]: I1008 13:37:35.466362 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d473a1f-35dc-4b20-a344-19c23f1c8c06","Type":"ContainerStarted","Data":"22b3afb5168cedc8711dd30cf6488d464901864b51d4792cb1a58f1c8ddeafc2"}
Oct 08 13:37:36 crc kubenswrapper[5065]: I1008 13:37:36.143934 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5dd9f968c6-s658p"
Oct 08 13:37:36 crc kubenswrapper[5065]: I1008 13:37:36.298074 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5dd9f968c6-s658p"
Oct 08 13:37:36 crc kubenswrapper[5065]: I1008 13:37:36.357853 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-595455d844-89td9"]
Oct 08 13:37:36 crc kubenswrapper[5065]: I1008 13:37:36.362372 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-595455d844-89td9" podUID="c06929d7-4d71-4a93-9979-e332cf99b06d" containerName="barbican-api-log" containerID="cri-o://959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874" gracePeriod=30
Oct 08 13:37:36 crc kubenswrapper[5065]: I1008 13:37:36.362512 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-595455d844-89td9" podUID="c06929d7-4d71-4a93-9979-e332cf99b06d" containerName="barbican-api" containerID="cri-o://a3359081ffe672c62bcbed33bcf184a5513b037e9d4a90cc2820d0d18acafdcc" gracePeriod=30
Oct 08 13:37:36 crc kubenswrapper[5065]: I1008 13:37:36.455280 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5f88c4599-sd7mw"
Oct 08 13:37:36 crc kubenswrapper[5065]: I1008 13:37:36.515883 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56c8fc79b6-9z5rs"]
Oct 08 13:37:36 crc kubenswrapper[5065]: I1008 13:37:36.517463 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56c8fc79b6-9z5rs" podUID="62f8e978-1ba0-4348-9340-13a1e9083e02" containerName="neutron-api" containerID="cri-o://d06960d03a3ed02bd4d29a02dbc96df37518f4d292074d7695176fa1518cd411" gracePeriod=30
Oct 08 13:37:36 crc kubenswrapper[5065]: I1008 13:37:36.517792 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56c8fc79b6-9z5rs" podUID="62f8e978-1ba0-4348-9340-13a1e9083e02" containerName="neutron-httpd" containerID="cri-o://40b40b2edd3a5f5cb3362a3a4a3fcc2cd3710d8209b1bc6f335a96d19b99cfc8" gracePeriod=30
Oct 08 13:37:36 crc kubenswrapper[5065]: I1008 13:37:36.532356 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d473a1f-35dc-4b20-a344-19c23f1c8c06","Type":"ContainerStarted","Data":"031335faa75843deb180f0a407142ad9a9127dae436f5b2f0f90352f75ef55b1"}
Oct 08 13:37:37 crc kubenswrapper[5065]: I1008 13:37:37.549280 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d473a1f-35dc-4b20-a344-19c23f1c8c06","Type":"ContainerStarted","Data":"2f9df638248c0e2761238e4dd3780cc9ae247b9e55d3548a3bb207837154cf62"}
Oct 08 13:37:37 crc kubenswrapper[5065]: I1008 13:37:37.553440 5065 generic.go:334] "Generic (PLEG): container finished" podID="62f8e978-1ba0-4348-9340-13a1e9083e02" containerID="40b40b2edd3a5f5cb3362a3a4a3fcc2cd3710d8209b1bc6f335a96d19b99cfc8" exitCode=0
Oct 08 13:37:37 crc kubenswrapper[5065]: I1008 13:37:37.553501 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c8fc79b6-9z5rs" event={"ID":"62f8e978-1ba0-4348-9340-13a1e9083e02","Type":"ContainerDied","Data":"40b40b2edd3a5f5cb3362a3a4a3fcc2cd3710d8209b1bc6f335a96d19b99cfc8"}
Oct 08 13:37:37 crc kubenswrapper[5065]: I1008 13:37:37.556067 5065 generic.go:334] "Generic (PLEG): container finished" podID="c06929d7-4d71-4a93-9979-e332cf99b06d" containerID="959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874" exitCode=143
Oct 08 13:37:37 crc kubenswrapper[5065]: I1008 13:37:37.556112 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595455d844-89td9" event={"ID":"c06929d7-4d71-4a93-9979-e332cf99b06d","Type":"ContainerDied","Data":"959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874"}
event={"ID":"c06929d7-4d71-4a93-9979-e332cf99b06d","Type":"ContainerDied","Data":"959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874"} Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.553301 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.578407 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.578389468 podStartE2EDuration="4.578389468s" podCreationTimestamp="2025-10-08 13:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:37.575019178 +0000 UTC m=+1159.352400935" watchObservedRunningTime="2025-10-08 13:37:38.578389468 +0000 UTC m=+1160.355771215" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.594949 5065 generic.go:334] "Generic (PLEG): container finished" podID="62f8e978-1ba0-4348-9340-13a1e9083e02" containerID="d06960d03a3ed02bd4d29a02dbc96df37518f4d292074d7695176fa1518cd411" exitCode=0 Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.597388 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56c8fc79b6-9z5rs" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.598448 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c8fc79b6-9z5rs" event={"ID":"62f8e978-1ba0-4348-9340-13a1e9083e02","Type":"ContainerDied","Data":"d06960d03a3ed02bd4d29a02dbc96df37518f4d292074d7695176fa1518cd411"} Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.598497 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c8fc79b6-9z5rs" event={"ID":"62f8e978-1ba0-4348-9340-13a1e9083e02","Type":"ContainerDied","Data":"86752f14c63cc747503461a2b205b1301d9a18f0a2e21e85dfb5aacab6aa0a86"} Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.598516 5065 scope.go:117] "RemoveContainer" containerID="40b40b2edd3a5f5cb3362a3a4a3fcc2cd3710d8209b1bc6f335a96d19b99cfc8" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.627594 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-config\") pod \"62f8e978-1ba0-4348-9340-13a1e9083e02\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.627711 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7dh5\" (UniqueName: \"kubernetes.io/projected/62f8e978-1ba0-4348-9340-13a1e9083e02-kube-api-access-g7dh5\") pod \"62f8e978-1ba0-4348-9340-13a1e9083e02\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.627811 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-combined-ca-bundle\") pod \"62f8e978-1ba0-4348-9340-13a1e9083e02\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.627981 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-ovndb-tls-certs\") pod \"62f8e978-1ba0-4348-9340-13a1e9083e02\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " Oct 08 
13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.628079 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-httpd-config\") pod \"62f8e978-1ba0-4348-9340-13a1e9083e02\" (UID: \"62f8e978-1ba0-4348-9340-13a1e9083e02\") " Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.634689 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "62f8e978-1ba0-4348-9340-13a1e9083e02" (UID: "62f8e978-1ba0-4348-9340-13a1e9083e02"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.636771 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f8e978-1ba0-4348-9340-13a1e9083e02-kube-api-access-g7dh5" (OuterVolumeSpecName: "kube-api-access-g7dh5") pod "62f8e978-1ba0-4348-9340-13a1e9083e02" (UID: "62f8e978-1ba0-4348-9340-13a1e9083e02"). InnerVolumeSpecName "kube-api-access-g7dh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.664812 5065 scope.go:117] "RemoveContainer" containerID="d06960d03a3ed02bd4d29a02dbc96df37518f4d292074d7695176fa1518cd411" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.697552 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-config" (OuterVolumeSpecName: "config") pod "62f8e978-1ba0-4348-9340-13a1e9083e02" (UID: "62f8e978-1ba0-4348-9340-13a1e9083e02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.723169 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62f8e978-1ba0-4348-9340-13a1e9083e02" (UID: "62f8e978-1ba0-4348-9340-13a1e9083e02"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.731352 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.732193 5065 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.732255 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.732278 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7dh5\" (UniqueName: \"kubernetes.io/projected/62f8e978-1ba0-4348-9340-13a1e9083e02-kube-api-access-g7dh5\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.747317 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "62f8e978-1ba0-4348-9340-13a1e9083e02" (UID: "62f8e978-1ba0-4348-9340-13a1e9083e02"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.752678 5065 scope.go:117] "RemoveContainer" containerID="40b40b2edd3a5f5cb3362a3a4a3fcc2cd3710d8209b1bc6f335a96d19b99cfc8" Oct 08 13:37:38 crc kubenswrapper[5065]: E1008 13:37:38.753258 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b40b2edd3a5f5cb3362a3a4a3fcc2cd3710d8209b1bc6f335a96d19b99cfc8\": container with ID starting with 40b40b2edd3a5f5cb3362a3a4a3fcc2cd3710d8209b1bc6f335a96d19b99cfc8 not found: ID does not exist" containerID="40b40b2edd3a5f5cb3362a3a4a3fcc2cd3710d8209b1bc6f335a96d19b99cfc8" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.753297 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b40b2edd3a5f5cb3362a3a4a3fcc2cd3710d8209b1bc6f335a96d19b99cfc8"} err="failed to get container status \"40b40b2edd3a5f5cb3362a3a4a3fcc2cd3710d8209b1bc6f335a96d19b99cfc8\": rpc error: code = NotFound desc = could not find container \"40b40b2edd3a5f5cb3362a3a4a3fcc2cd3710d8209b1bc6f335a96d19b99cfc8\": container with ID starting with 40b40b2edd3a5f5cb3362a3a4a3fcc2cd3710d8209b1bc6f335a96d19b99cfc8 not found: ID does not exist" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.753320 5065 scope.go:117] "RemoveContainer" containerID="d06960d03a3ed02bd4d29a02dbc96df37518f4d292074d7695176fa1518cd411" Oct 08 13:37:38 crc kubenswrapper[5065]: E1008 13:37:38.753580 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d06960d03a3ed02bd4d29a02dbc96df37518f4d292074d7695176fa1518cd411\": container with ID starting with d06960d03a3ed02bd4d29a02dbc96df37518f4d292074d7695176fa1518cd411 not found: ID does not exist" containerID="d06960d03a3ed02bd4d29a02dbc96df37518f4d292074d7695176fa1518cd411" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.753607 5065 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"d06960d03a3ed02bd4d29a02dbc96df37518f4d292074d7695176fa1518cd411"} err="failed to get container status \"d06960d03a3ed02bd4d29a02dbc96df37518f4d292074d7695176fa1518cd411\": rpc error: code = NotFound desc = could not find container \"d06960d03a3ed02bd4d29a02dbc96df37518f4d292074d7695176fa1518cd411\": container with ID starting with d06960d03a3ed02bd4d29a02dbc96df37518f4d292074d7695176fa1518cd411 not found: ID does not exist" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.834669 5065 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f8e978-1ba0-4348-9340-13a1e9083e02-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.951119 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56c8fc79b6-9z5rs"] Oct 08 13:37:38 crc kubenswrapper[5065]: I1008 13:37:38.963781 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-56c8fc79b6-9z5rs"] Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.076353 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-85b95d746c-knffl"] Oct 08 13:37:39 crc kubenswrapper[5065]: E1008 13:37:39.076861 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f8e978-1ba0-4348-9340-13a1e9083e02" containerName="neutron-httpd" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.076888 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f8e978-1ba0-4348-9340-13a1e9083e02" containerName="neutron-httpd" Oct 08 13:37:39 crc kubenswrapper[5065]: E1008 13:37:39.076926 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f8e978-1ba0-4348-9340-13a1e9083e02" containerName="neutron-api" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.076937 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f8e978-1ba0-4348-9340-13a1e9083e02" containerName="neutron-api" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.077132 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f8e978-1ba0-4348-9340-13a1e9083e02" containerName="neutron-api" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.077172 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f8e978-1ba0-4348-9340-13a1e9083e02" containerName="neutron-httpd" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.078138 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.080626 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.080850 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.081027 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.107345 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85b95d746c-knffl"] Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.138912 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caf670f8-9cf6-4200-8036-05e9798cad78-log-httpd\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.138992 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5plxz\" (UniqueName: \"kubernetes.io/projected/caf670f8-9cf6-4200-8036-05e9798cad78-kube-api-access-5plxz\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.139024 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-public-tls-certs\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.139158 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-config-data\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.139409 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-combined-ca-bundle\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.139475 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-internal-tls-certs\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.139774 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/caf670f8-9cf6-4200-8036-05e9798cad78-etc-swift\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " 
pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.139821 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caf670f8-9cf6-4200-8036-05e9798cad78-run-httpd\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.241031 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-config-data\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.241710 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-combined-ca-bundle\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.241744 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-internal-tls-certs\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.241802 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/caf670f8-9cf6-4200-8036-05e9798cad78-etc-swift\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.241847 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caf670f8-9cf6-4200-8036-05e9798cad78-run-httpd\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.241922 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caf670f8-9cf6-4200-8036-05e9798cad78-log-httpd\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.242013 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5plxz\" (UniqueName: \"kubernetes.io/projected/caf670f8-9cf6-4200-8036-05e9798cad78-kube-api-access-5plxz\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.242055 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-public-tls-certs\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 
13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.242779 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caf670f8-9cf6-4200-8036-05e9798cad78-run-httpd\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.242893 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caf670f8-9cf6-4200-8036-05e9798cad78-log-httpd\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.245943 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-internal-tls-certs\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.247258 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-combined-ca-bundle\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.247863 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/caf670f8-9cf6-4200-8036-05e9798cad78-etc-swift\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.251383 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-config-data\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.251960 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-public-tls-certs\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.264139 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5plxz\" (UniqueName: \"kubernetes.io/projected/caf670f8-9cf6-4200-8036-05e9798cad78-kube-api-access-5plxz\") pod \"swift-proxy-85b95d746c-knffl\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.394321 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.397113 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.397386 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="372e941a-3d7d-49ee-84c1-9d3d159d603f" containerName="glance-log" containerID="cri-o://0d1e6f22995ab2c22af8be1cb1d3697d181e18f85709f9cc2b049586cc48bc20" gracePeriod=30 Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.397787 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="372e941a-3d7d-49ee-84c1-9d3d159d603f" containerName="glance-httpd" containerID="cri-o://0adb44177ccce437d423f3ea76c3e1102bc6b162fc2fc3fd423fd84cf51206b4" gracePeriod=30 Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.467719 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.580627 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-595455d844-89td9" podUID="c06929d7-4d71-4a93-9979-e332cf99b06d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:33380->10.217.0.159:9311: read: connection reset by peer" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.581341 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-595455d844-89td9" podUID="c06929d7-4d71-4a93-9979-e332cf99b06d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:33370->10.217.0.159:9311: read: connection reset by peer" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.647965 5065 generic.go:334] "Generic (PLEG): container finished" podID="372e941a-3d7d-49ee-84c1-9d3d159d603f" containerID="0d1e6f22995ab2c22af8be1cb1d3697d181e18f85709f9cc2b049586cc48bc20" exitCode=143 Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.648095 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"372e941a-3d7d-49ee-84c1-9d3d159d603f","Type":"ContainerDied","Data":"0d1e6f22995ab2c22af8be1cb1d3697d181e18f85709f9cc2b049586cc48bc20"} Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.831838 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.832215 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="ceilometer-central-agent" containerID="cri-o://a0b594d00e9d4c672d3568771c0c3ab22b43dcf04a37db230faa87c980a18a20" gracePeriod=30 Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.832369 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="proxy-httpd" containerID="cri-o://2255625d5a288ce5c8fd77abfcc524659b9bfee90576c0aa8da45f00930a1ae8" gracePeriod=30 Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.832457 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a25d389f-4c59-4f3f-b110-291380171975" 
containerName="sg-core" containerID="cri-o://a3c731a6cafa20db04649c07e1b1f808507b0a7eb52d39a8a6a3ade2c0fe09de" gracePeriod=30 Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.832514 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="ceilometer-notification-agent" containerID="cri-o://30ef47fe2a73dbe0b6069d7be23b595d7da2da852e42010cdd476e54bf37cdd0" gracePeriod=30 Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.843878 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 08 13:37:39 crc kubenswrapper[5065]: I1008 13:37:39.855157 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.064982 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-595455d844-89td9" Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.065001 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85b95d746c-knffl"] Oct 08 13:37:40 crc kubenswrapper[5065]: W1008 13:37:40.085994 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaf670f8_9cf6_4200_8036_05e9798cad78.slice/crio-6a3261ec5809cdf2cf9598fc1759fd359a19e4343bcd3a70feec10c8c29e1fe4 WatchSource:0}: Error finding container 6a3261ec5809cdf2cf9598fc1759fd359a19e4343bcd3a70feec10c8c29e1fe4: Status 404 returned error can't find the container with id 6a3261ec5809cdf2cf9598fc1759fd359a19e4343bcd3a70feec10c8c29e1fe4 Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.163617 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-combined-ca-bundle\") pod \"c06929d7-4d71-4a93-9979-e332cf99b06d\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.163944 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c06929d7-4d71-4a93-9979-e332cf99b06d-logs\") pod \"c06929d7-4d71-4a93-9979-e332cf99b06d\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.164072 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-config-data-custom\") pod \"c06929d7-4d71-4a93-9979-e332cf99b06d\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.164208 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-config-data\") pod \"c06929d7-4d71-4a93-9979-e332cf99b06d\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.164463 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gcqn\" (UniqueName: \"kubernetes.io/projected/c06929d7-4d71-4a93-9979-e332cf99b06d-kube-api-access-8gcqn\") pod \"c06929d7-4d71-4a93-9979-e332cf99b06d\" (UID: \"c06929d7-4d71-4a93-9979-e332cf99b06d\") " Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.166944 
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.170759 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06929d7-4d71-4a93-9979-e332cf99b06d-kube-api-access-8gcqn" (OuterVolumeSpecName: "kube-api-access-8gcqn") pod "c06929d7-4d71-4a93-9979-e332cf99b06d" (UID: "c06929d7-4d71-4a93-9979-e332cf99b06d"). InnerVolumeSpecName "kube-api-access-8gcqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.189226 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c06929d7-4d71-4a93-9979-e332cf99b06d" (UID: "c06929d7-4d71-4a93-9979-e332cf99b06d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.199997 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c06929d7-4d71-4a93-9979-e332cf99b06d" (UID: "c06929d7-4d71-4a93-9979-e332cf99b06d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.257447 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-config-data" (OuterVolumeSpecName: "config-data") pod "c06929d7-4d71-4a93-9979-e332cf99b06d" (UID: "c06929d7-4d71-4a93-9979-e332cf99b06d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.266869 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gcqn\" (UniqueName: \"kubernetes.io/projected/c06929d7-4d71-4a93-9979-e332cf99b06d-kube-api-access-8gcqn\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.266904 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.266916 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c06929d7-4d71-4a93-9979-e332cf99b06d-logs\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.266927 5065 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.266937 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06929d7-4d71-4a93-9979-e332cf99b06d-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.664733 5065 generic.go:334] "Generic (PLEG): container finished" podID="a25d389f-4c59-4f3f-b110-291380171975" containerID="2255625d5a288ce5c8fd77abfcc524659b9bfee90576c0aa8da45f00930a1ae8" exitCode=0
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.664767 5065 generic.go:334] "Generic (PLEG): container finished" podID="a25d389f-4c59-4f3f-b110-291380171975" containerID="a3c731a6cafa20db04649c07e1b1f808507b0a7eb52d39a8a6a3ade2c0fe09de" exitCode=2
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.664776 5065 generic.go:334] "Generic (PLEG): container finished" podID="a25d389f-4c59-4f3f-b110-291380171975" containerID="a0b594d00e9d4c672d3568771c0c3ab22b43dcf04a37db230faa87c980a18a20" exitCode=0
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.664816 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a25d389f-4c59-4f3f-b110-291380171975","Type":"ContainerDied","Data":"2255625d5a288ce5c8fd77abfcc524659b9bfee90576c0aa8da45f00930a1ae8"}
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.664861 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a25d389f-4c59-4f3f-b110-291380171975","Type":"ContainerDied","Data":"a3c731a6cafa20db04649c07e1b1f808507b0a7eb52d39a8a6a3ade2c0fe09de"}
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.664877 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a25d389f-4c59-4f3f-b110-291380171975","Type":"ContainerDied","Data":"a0b594d00e9d4c672d3568771c0c3ab22b43dcf04a37db230faa87c980a18a20"}
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.666813 5065 generic.go:334] "Generic (PLEG): container finished" podID="c06929d7-4d71-4a93-9979-e332cf99b06d" containerID="a3359081ffe672c62bcbed33bcf184a5513b037e9d4a90cc2820d0d18acafdcc" exitCode=0
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.666865 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-595455d844-89td9"
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.666878 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595455d844-89td9" event={"ID":"c06929d7-4d71-4a93-9979-e332cf99b06d","Type":"ContainerDied","Data":"a3359081ffe672c62bcbed33bcf184a5513b037e9d4a90cc2820d0d18acafdcc"}
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.666897 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595455d844-89td9" event={"ID":"c06929d7-4d71-4a93-9979-e332cf99b06d","Type":"ContainerDied","Data":"67fa0198f174e476066f25af2e6e67bf69c1adc6a666869e9fcb30770dba46ad"}
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.666915 5065 scope.go:117] "RemoveContainer" containerID="a3359081ffe672c62bcbed33bcf184a5513b037e9d4a90cc2820d0d18acafdcc"
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.673967 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b95d746c-knffl" event={"ID":"caf670f8-9cf6-4200-8036-05e9798cad78","Type":"ContainerStarted","Data":"3c914d8505b39861ccdfff5dcc013fed1cc1c91fa6e400a924167a096e07326c"}
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.674079 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b95d746c-knffl" event={"ID":"caf670f8-9cf6-4200-8036-05e9798cad78","Type":"ContainerStarted","Data":"6a3261ec5809cdf2cf9598fc1759fd359a19e4343bcd3a70feec10c8c29e1fe4"}
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.699837 5065 scope.go:117] "RemoveContainer" containerID="959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874"
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.716671 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-595455d844-89td9"]
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.719999 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-595455d844-89td9"]
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.741219 5065 scope.go:117] "RemoveContainer" containerID="a3359081ffe672c62bcbed33bcf184a5513b037e9d4a90cc2820d0d18acafdcc"
Oct 08 13:37:40 crc kubenswrapper[5065]: E1008 13:37:40.742341 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3359081ffe672c62bcbed33bcf184a5513b037e9d4a90cc2820d0d18acafdcc\": container with ID starting with a3359081ffe672c62bcbed33bcf184a5513b037e9d4a90cc2820d0d18acafdcc not found: ID does not exist" containerID="a3359081ffe672c62bcbed33bcf184a5513b037e9d4a90cc2820d0d18acafdcc"
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.742382 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3359081ffe672c62bcbed33bcf184a5513b037e9d4a90cc2820d0d18acafdcc"} err="failed to get container status \"a3359081ffe672c62bcbed33bcf184a5513b037e9d4a90cc2820d0d18acafdcc\": rpc error: code = NotFound desc = could not find container \"a3359081ffe672c62bcbed33bcf184a5513b037e9d4a90cc2820d0d18acafdcc\": container with ID starting with a3359081ffe672c62bcbed33bcf184a5513b037e9d4a90cc2820d0d18acafdcc not found: ID does not exist"
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.742407 5065 scope.go:117] "RemoveContainer" containerID="959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874"
Oct 08 13:37:40 crc kubenswrapper[5065]: E1008 13:37:40.745403 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874\": container with ID starting with 959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874 not found: ID does not exist" containerID="959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874"
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.745446 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874"} err="failed to get container status \"959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874\": rpc error: code = NotFound desc = could not find container \"959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874\": container with ID starting with 959d0d6343d4a5d530ce9555753f855355787cacbaec3aa36d954ed702709874 not found: ID does not exist"
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.748159 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.748445 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7e25f362-2a7f-48cb-b91f-18938713da5b" containerName="glance-log" containerID="cri-o://23444c74e68ca2731c3fdae6d6012d70c465c50587151ed8258d49e05535c7db" gracePeriod=30
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.748601 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7e25f362-2a7f-48cb-b91f-18938713da5b" containerName="glance-httpd" containerID="cri-o://a07a2095a71689cf1898efb6bd35c1151e37d73530fa2963acfbb21af6689e23" gracePeriod=30
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.886797 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f8e978-1ba0-4348-9340-13a1e9083e02" path="/var/lib/kubelet/pods/62f8e978-1ba0-4348-9340-13a1e9083e02/volumes"
Oct 08 13:37:40 crc kubenswrapper[5065]: I1008 13:37:40.887694 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06929d7-4d71-4a93-9979-e332cf99b06d" path="/var/lib/kubelet/pods/c06929d7-4d71-4a93-9979-e332cf99b06d/volumes"
Oct 08 13:37:41 crc kubenswrapper[5065]: I1008 13:37:41.688527 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b95d746c-knffl" event={"ID":"caf670f8-9cf6-4200-8036-05e9798cad78","Type":"ContainerStarted","Data":"b8ae852d80aae6acad85780abb16c8854f1fddd5178e43292b6c487839c29fc5"}
Oct 08 13:37:41 crc kubenswrapper[5065]: I1008 13:37:41.689785 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85b95d746c-knffl"
Oct 08 13:37:41 crc kubenswrapper[5065]: I1008 13:37:41.689813 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85b95d746c-knffl"
Oct 08 13:37:41 crc kubenswrapper[5065]: I1008 13:37:41.696565 5065 generic.go:334] "Generic (PLEG): container finished" podID="7e25f362-2a7f-48cb-b91f-18938713da5b" containerID="23444c74e68ca2731c3fdae6d6012d70c465c50587151ed8258d49e05535c7db" exitCode=143
Oct 08 13:37:41 crc kubenswrapper[5065]: I1008 13:37:41.696657 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e25f362-2a7f-48cb-b91f-18938713da5b","Type":"ContainerDied","Data":"23444c74e68ca2731c3fdae6d6012d70c465c50587151ed8258d49e05535c7db"}
Oct 08 13:37:41 crc kubenswrapper[5065]: I1008 13:37:41.725608 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-85b95d746c-knffl" podStartSLOduration=2.725584719 podStartE2EDuration="2.725584719s" podCreationTimestamp="2025-10-08 13:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:41.72053189 +0000 UTC m=+1163.497913657" watchObservedRunningTime="2025-10-08 13:37:41.725584719 +0000 UTC m=+1163.502966476"
Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.561946 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-xn6cj"]
Oct 08 13:37:42 crc kubenswrapper[5065]: E1008 13:37:42.562586 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06929d7-4d71-4a93-9979-e332cf99b06d" containerName="barbican-api-log"
Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.562601 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06929d7-4d71-4a93-9979-e332cf99b06d" containerName="barbican-api-log"
Oct 08 13:37:42 crc kubenswrapper[5065]: E1008 13:37:42.562617 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06929d7-4d71-4a93-9979-e332cf99b06d" containerName="barbican-api"
Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.562622 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06929d7-4d71-4a93-9979-e332cf99b06d" containerName="barbican-api"
Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.562802 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="c06929d7-4d71-4a93-9979-e332cf99b06d" containerName="barbican-api"
Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.562818 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="c06929d7-4d71-4a93-9979-e332cf99b06d" containerName="barbican-api-log"
Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.563392 5065 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-db-create-xn6cj" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.573563 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="372e941a-3d7d-49ee-84c1-9d3d159d603f" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.149:9292/healthcheck\": read tcp 10.217.0.2:55212->10.217.0.149:9292: read: connection reset by peer" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.573837 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xn6cj"] Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.573883 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="372e941a-3d7d-49ee-84c1-9d3d159d603f" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.149:9292/healthcheck\": read tcp 10.217.0.2:55208->10.217.0.149:9292: read: connection reset by peer" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.609173 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8rwm\" (UniqueName: \"kubernetes.io/projected/b2d90dc7-e101-4bde-b8b3-e3c13e788004-kube-api-access-k8rwm\") pod \"nova-api-db-create-xn6cj\" (UID: \"b2d90dc7-e101-4bde-b8b3-e3c13e788004\") " pod="openstack/nova-api-db-create-xn6cj" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.657039 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-65r2q"] Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.661780 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-65r2q" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.687464 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-65r2q"] Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.711156 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8rwm\" (UniqueName: \"kubernetes.io/projected/b2d90dc7-e101-4bde-b8b3-e3c13e788004-kube-api-access-k8rwm\") pod \"nova-api-db-create-xn6cj\" (UID: \"b2d90dc7-e101-4bde-b8b3-e3c13e788004\") " pod="openstack/nova-api-db-create-xn6cj" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.711244 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x4j9\" (UniqueName: \"kubernetes.io/projected/5e89a553-dbd3-47d3-a187-c51aa149175c-kube-api-access-8x4j9\") pod \"nova-cell0-db-create-65r2q\" (UID: \"5e89a553-dbd3-47d3-a187-c51aa149175c\") " pod="openstack/nova-cell0-db-create-65r2q" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.713455 5065 generic.go:334] "Generic (PLEG): container finished" podID="372e941a-3d7d-49ee-84c1-9d3d159d603f" containerID="0adb44177ccce437d423f3ea76c3e1102bc6b162fc2fc3fd423fd84cf51206b4" exitCode=0 Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.713507 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"372e941a-3d7d-49ee-84c1-9d3d159d603f","Type":"ContainerDied","Data":"0adb44177ccce437d423f3ea76c3e1102bc6b162fc2fc3fd423fd84cf51206b4"} Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.717594 5065 generic.go:334] "Generic (PLEG): container finished" podID="a25d389f-4c59-4f3f-b110-291380171975" 
containerID="30ef47fe2a73dbe0b6069d7be23b595d7da2da852e42010cdd476e54bf37cdd0" exitCode=0 Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.718190 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a25d389f-4c59-4f3f-b110-291380171975","Type":"ContainerDied","Data":"30ef47fe2a73dbe0b6069d7be23b595d7da2da852e42010cdd476e54bf37cdd0"} Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.760551 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8rwm\" (UniqueName: \"kubernetes.io/projected/b2d90dc7-e101-4bde-b8b3-e3c13e788004-kube-api-access-k8rwm\") pod \"nova-api-db-create-xn6cj\" (UID: \"b2d90dc7-e101-4bde-b8b3-e3c13e788004\") " pod="openstack/nova-api-db-create-xn6cj" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.777486 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-gx42n"] Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.778669 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gx42n" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.791702 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gx42n"] Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.814403 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x4j9\" (UniqueName: \"kubernetes.io/projected/5e89a553-dbd3-47d3-a187-c51aa149175c-kube-api-access-8x4j9\") pod \"nova-cell0-db-create-65r2q\" (UID: \"5e89a553-dbd3-47d3-a187-c51aa149175c\") " pod="openstack/nova-cell0-db-create-65r2q" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.814590 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8b8f\" (UniqueName: \"kubernetes.io/projected/653cf006-7306-4e5a-b70c-8454b8c47b2d-kube-api-access-r8b8f\") pod \"nova-cell1-db-create-gx42n\" (UID: \"653cf006-7306-4e5a-b70c-8454b8c47b2d\") " pod="openstack/nova-cell1-db-create-gx42n" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.845646 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x4j9\" (UniqueName: \"kubernetes.io/projected/5e89a553-dbd3-47d3-a187-c51aa149175c-kube-api-access-8x4j9\") pod \"nova-cell0-db-create-65r2q\" (UID: \"5e89a553-dbd3-47d3-a187-c51aa149175c\") " pod="openstack/nova-cell0-db-create-65r2q" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.914407 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xn6cj" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.916091 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8b8f\" (UniqueName: \"kubernetes.io/projected/653cf006-7306-4e5a-b70c-8454b8c47b2d-kube-api-access-r8b8f\") pod \"nova-cell1-db-create-gx42n\" (UID: \"653cf006-7306-4e5a-b70c-8454b8c47b2d\") " pod="openstack/nova-cell1-db-create-gx42n" Oct 08 13:37:42 crc kubenswrapper[5065]: I1008 13:37:42.975872 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8b8f\" (UniqueName: \"kubernetes.io/projected/653cf006-7306-4e5a-b70c-8454b8c47b2d-kube-api-access-r8b8f\") pod \"nova-cell1-db-create-gx42n\" (UID: \"653cf006-7306-4e5a-b70c-8454b8c47b2d\") " pod="openstack/nova-cell1-db-create-gx42n" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.055830 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-65r2q" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.075474 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gx42n" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.120133 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.220799 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-config-data\") pod \"a25d389f-4c59-4f3f-b110-291380171975\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.221088 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-scripts\") pod \"a25d389f-4c59-4f3f-b110-291380171975\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.221156 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-sg-core-conf-yaml\") pod \"a25d389f-4c59-4f3f-b110-291380171975\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.221217 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a25d389f-4c59-4f3f-b110-291380171975-run-httpd\") pod \"a25d389f-4c59-4f3f-b110-291380171975\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.221341 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a25d389f-4c59-4f3f-b110-291380171975-log-httpd\") pod \"a25d389f-4c59-4f3f-b110-291380171975\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.221409 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-combined-ca-bundle\") pod \"a25d389f-4c59-4f3f-b110-291380171975\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.221467 5065 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv8c9\" (UniqueName: \"kubernetes.io/projected/a25d389f-4c59-4f3f-b110-291380171975-kube-api-access-gv8c9\") pod \"a25d389f-4c59-4f3f-b110-291380171975\" (UID: \"a25d389f-4c59-4f3f-b110-291380171975\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.222673 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25d389f-4c59-4f3f-b110-291380171975-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a25d389f-4c59-4f3f-b110-291380171975" (UID: "a25d389f-4c59-4f3f-b110-291380171975"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.223314 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25d389f-4c59-4f3f-b110-291380171975-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a25d389f-4c59-4f3f-b110-291380171975" (UID: "a25d389f-4c59-4f3f-b110-291380171975"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.233696 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25d389f-4c59-4f3f-b110-291380171975-kube-api-access-gv8c9" (OuterVolumeSpecName: "kube-api-access-gv8c9") pod "a25d389f-4c59-4f3f-b110-291380171975" (UID: "a25d389f-4c59-4f3f-b110-291380171975"). InnerVolumeSpecName "kube-api-access-gv8c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.259720 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-scripts" (OuterVolumeSpecName: "scripts") pod "a25d389f-4c59-4f3f-b110-291380171975" (UID: "a25d389f-4c59-4f3f-b110-291380171975"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.294894 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a25d389f-4c59-4f3f-b110-291380171975" (UID: "a25d389f-4c59-4f3f-b110-291380171975"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.318251 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.323511 5065 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a25d389f-4c59-4f3f-b110-291380171975-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.323541 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv8c9\" (UniqueName: \"kubernetes.io/projected/a25d389f-4c59-4f3f-b110-291380171975-kube-api-access-gv8c9\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.323551 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.323559 5065 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.323568 5065 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a25d389f-4c59-4f3f-b110-291380171975-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.362989 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a25d389f-4c59-4f3f-b110-291380171975" (UID: "a25d389f-4c59-4f3f-b110-291380171975"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.424339 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd58l\" (UniqueName: \"kubernetes.io/projected/372e941a-3d7d-49ee-84c1-9d3d159d603f-kube-api-access-xd58l\") pod \"372e941a-3d7d-49ee-84c1-9d3d159d603f\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.424399 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-combined-ca-bundle\") pod \"372e941a-3d7d-49ee-84c1-9d3d159d603f\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.424496 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/372e941a-3d7d-49ee-84c1-9d3d159d603f-httpd-run\") pod \"372e941a-3d7d-49ee-84c1-9d3d159d603f\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.424639 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"372e941a-3d7d-49ee-84c1-9d3d159d603f\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.424717 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-public-tls-certs\") pod \"372e941a-3d7d-49ee-84c1-9d3d159d603f\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.424756 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-scripts\") pod \"372e941a-3d7d-49ee-84c1-9d3d159d603f\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.424782 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-config-data\") pod \"372e941a-3d7d-49ee-84c1-9d3d159d603f\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.424825 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372e941a-3d7d-49ee-84c1-9d3d159d603f-logs\") pod \"372e941a-3d7d-49ee-84c1-9d3d159d603f\" (UID: \"372e941a-3d7d-49ee-84c1-9d3d159d603f\") " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.425434 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.426051 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372e941a-3d7d-49ee-84c1-9d3d159d603f-logs" (OuterVolumeSpecName: "logs") pod "372e941a-3d7d-49ee-84c1-9d3d159d603f" (UID: "372e941a-3d7d-49ee-84c1-9d3d159d603f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.428718 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372e941a-3d7d-49ee-84c1-9d3d159d603f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "372e941a-3d7d-49ee-84c1-9d3d159d603f" (UID: "372e941a-3d7d-49ee-84c1-9d3d159d603f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.433097 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372e941a-3d7d-49ee-84c1-9d3d159d603f-kube-api-access-xd58l" (OuterVolumeSpecName: "kube-api-access-xd58l") pod "372e941a-3d7d-49ee-84c1-9d3d159d603f" (UID: "372e941a-3d7d-49ee-84c1-9d3d159d603f"). InnerVolumeSpecName "kube-api-access-xd58l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.435801 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-scripts" (OuterVolumeSpecName: "scripts") pod "372e941a-3d7d-49ee-84c1-9d3d159d603f" (UID: "372e941a-3d7d-49ee-84c1-9d3d159d603f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.448520 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-config-data" (OuterVolumeSpecName: "config-data") pod "a25d389f-4c59-4f3f-b110-291380171975" (UID: "a25d389f-4c59-4f3f-b110-291380171975"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.451548 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "372e941a-3d7d-49ee-84c1-9d3d159d603f" (UID: "372e941a-3d7d-49ee-84c1-9d3d159d603f"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.468905 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "372e941a-3d7d-49ee-84c1-9d3d159d603f" (UID: "372e941a-3d7d-49ee-84c1-9d3d159d603f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.482046 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-config-data" (OuterVolumeSpecName: "config-data") pod "372e941a-3d7d-49ee-84c1-9d3d159d603f" (UID: "372e941a-3d7d-49ee-84c1-9d3d159d603f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.503908 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "372e941a-3d7d-49ee-84c1-9d3d159d603f" (UID: "372e941a-3d7d-49ee-84c1-9d3d159d603f"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.526504 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.526546 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.526560 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372e941a-3d7d-49ee-84c1-9d3d159d603f-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.526570 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd58l\" (UniqueName: \"kubernetes.io/projected/372e941a-3d7d-49ee-84c1-9d3d159d603f-kube-api-access-xd58l\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.526583 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.526595 5065 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/372e941a-3d7d-49ee-84c1-9d3d159d603f-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.526612 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25d389f-4c59-4f3f-b110-291380171975-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.526646 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.526661 5065 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/372e941a-3d7d-49ee-84c1-9d3d159d603f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.569164 5065 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.629642 5065 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.687041 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xn6cj"] Oct 08 13:37:43 crc kubenswrapper[5065]: W1008 13:37:43.705388 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2d90dc7_e101_4bde_b8b3_e3c13e788004.slice/crio-a47c07788842f718664fef62e8b92ac748cd1a6701c4cf9792105560351ded44 WatchSource:0}: Error finding container a47c07788842f718664fef62e8b92ac748cd1a6701c4cf9792105560351ded44: Status 404 returned error can't find the container with id 
a47c07788842f718664fef62e8b92ac748cd1a6701c4cf9792105560351ded44 Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.798231 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"372e941a-3d7d-49ee-84c1-9d3d159d603f","Type":"ContainerDied","Data":"98aa51f6c3b6c9e45f7a9b1963524bc8d86236beeeb2e40b40c92ea6500b2e6c"} Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.798277 5065 scope.go:117] "RemoveContainer" containerID="0adb44177ccce437d423f3ea76c3e1102bc6b162fc2fc3fd423fd84cf51206b4" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.798463 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.843659 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xn6cj" event={"ID":"b2d90dc7-e101-4bde-b8b3-e3c13e788004","Type":"ContainerStarted","Data":"a47c07788842f718664fef62e8b92ac748cd1a6701c4cf9792105560351ded44"} Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.865652 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gx42n"] Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.898483 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-65r2q"] Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.900528 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a25d389f-4c59-4f3f-b110-291380171975","Type":"ContainerDied","Data":"9ecaf46547de7d9fe2b426a92e9ba185fa29f9909aee309af50f7e4dd6de572b"} Oct 08 13:37:43 crc kubenswrapper[5065]: I1008 13:37:43.900693 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.015568 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.026118 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.040823 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.061275 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:37:44 crc kubenswrapper[5065]: E1008 13:37:44.061787 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="ceilometer-notification-agent" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.061805 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="ceilometer-notification-agent" Oct 08 13:37:44 crc kubenswrapper[5065]: E1008 13:37:44.061822 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="ceilometer-central-agent" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.061828 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="ceilometer-central-agent" Oct 08 13:37:44 crc kubenswrapper[5065]: E1008 13:37:44.061840 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372e941a-3d7d-49ee-84c1-9d3d159d603f" containerName="glance-log" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.061846 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="372e941a-3d7d-49ee-84c1-9d3d159d603f" containerName="glance-log" Oct 08 13:37:44 crc kubenswrapper[5065]: E1008 13:37:44.061856 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372e941a-3d7d-49ee-84c1-9d3d159d603f" containerName="glance-httpd" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.061863 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="372e941a-3d7d-49ee-84c1-9d3d159d603f" containerName="glance-httpd" Oct 08 13:37:44 crc kubenswrapper[5065]: E1008 13:37:44.061878 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="proxy-httpd" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.061884 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="proxy-httpd" Oct 08 13:37:44 crc kubenswrapper[5065]: E1008 13:37:44.061894 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="sg-core" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.061899 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="sg-core" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.062065 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="ceilometer-central-agent" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.066470 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="sg-core" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.066515 5065 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="372e941a-3d7d-49ee-84c1-9d3d159d603f" containerName="glance-httpd" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.066528 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="ceilometer-notification-agent" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.066552 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="372e941a-3d7d-49ee-84c1-9d3d159d603f" containerName="glance-log" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.066568 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25d389f-4c59-4f3f-b110-291380171975" containerName="proxy-httpd" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.079643 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.079745 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.088401 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.090198 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.090377 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.101939 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.108192 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.111407 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.111602 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.129543 5065 scope.go:117] "RemoveContainer" containerID="0d1e6f22995ab2c22af8be1cb1d3697d181e18f85709f9cc2b049586cc48bc20" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.132120 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151304 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-log-httpd\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151355 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-scripts\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151391 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-scripts\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151475 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151582 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-config-data\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151623 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cea80f5-d915-459c-9882-4ce114929ab4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151740 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151787 5065 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151806 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151876 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cea80f5-d915-459c-9882-4ce114929ab4-logs\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151894 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-config-data\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151918 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-run-httpd\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151945 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.151964 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bshtl\" (UniqueName: \"kubernetes.io/projected/8cea80f5-d915-459c-9882-4ce114929ab4-kube-api-access-bshtl\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.152042 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzl9j\" (UniqueName: \"kubernetes.io/projected/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-kube-api-access-kzl9j\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.255931 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.255985 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-config-data\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.256007 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cea80f5-d915-459c-9882-4ce114929ab4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.256045 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.256718 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cea80f5-d915-459c-9882-4ce114929ab4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.256785 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.256806 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.257165 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cea80f5-d915-459c-9882-4ce114929ab4-logs\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.257182 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-config-data\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.257205 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-run-httpd\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.257231 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " 
pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.257251 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bshtl\" (UniqueName: \"kubernetes.io/projected/8cea80f5-d915-459c-9882-4ce114929ab4-kube-api-access-bshtl\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.257317 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzl9j\" (UniqueName: \"kubernetes.io/projected/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-kube-api-access-kzl9j\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.257383 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-log-httpd\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.257436 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-scripts\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.257478 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-scripts\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.259000 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.260727 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-log-httpd\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.261228 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cea80f5-d915-459c-9882-4ce114929ab4-logs\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.262465 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-run-httpd\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.264054 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.265083 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-config-data\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.265915 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.269407 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.269767 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.270014 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-scripts\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.270367 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-config-data\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.270539 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-scripts\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.275013 5065 scope.go:117] "RemoveContainer" containerID="2255625d5a288ce5c8fd77abfcc524659b9bfee90576c0aa8da45f00930a1ae8" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.276995 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzl9j\" (UniqueName: \"kubernetes.io/projected/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-kube-api-access-kzl9j\") pod \"ceilometer-0\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.277155 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bshtl\" (UniqueName: \"kubernetes.io/projected/8cea80f5-d915-459c-9882-4ce114929ab4-kube-api-access-bshtl\") pod 
\"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.299559 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.347247 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.364024 5065 scope.go:117] "RemoveContainer" containerID="a3c731a6cafa20db04649c07e1b1f808507b0a7eb52d39a8a6a3ade2c0fe09de" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.376777 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.398633 5065 scope.go:117] "RemoveContainer" containerID="30ef47fe2a73dbe0b6069d7be23b595d7da2da852e42010cdd476e54bf37cdd0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.454138 5065 scope.go:117] "RemoveContainer" containerID="a0b594d00e9d4c672d3568771c0c3ab22b43dcf04a37db230faa87c980a18a20" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.742117 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.769462 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l5kj\" (UniqueName: \"kubernetes.io/projected/7e25f362-2a7f-48cb-b91f-18938713da5b-kube-api-access-5l5kj\") pod \"7e25f362-2a7f-48cb-b91f-18938713da5b\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.769564 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-internal-tls-certs\") pod \"7e25f362-2a7f-48cb-b91f-18938713da5b\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.769638 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"7e25f362-2a7f-48cb-b91f-18938713da5b\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.769723 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e25f362-2a7f-48cb-b91f-18938713da5b-logs\") pod \"7e25f362-2a7f-48cb-b91f-18938713da5b\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.769753 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-combined-ca-bundle\") pod \"7e25f362-2a7f-48cb-b91f-18938713da5b\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.769826 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-config-data\") pod \"7e25f362-2a7f-48cb-b91f-18938713da5b\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.769882 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-scripts\") pod \"7e25f362-2a7f-48cb-b91f-18938713da5b\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.769936 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e25f362-2a7f-48cb-b91f-18938713da5b-httpd-run\") pod \"7e25f362-2a7f-48cb-b91f-18938713da5b\" (UID: \"7e25f362-2a7f-48cb-b91f-18938713da5b\") " Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.775682 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e25f362-2a7f-48cb-b91f-18938713da5b-logs" (OuterVolumeSpecName: "logs") pod "7e25f362-2a7f-48cb-b91f-18938713da5b" (UID: "7e25f362-2a7f-48cb-b91f-18938713da5b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.777615 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e25f362-2a7f-48cb-b91f-18938713da5b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7e25f362-2a7f-48cb-b91f-18938713da5b" (UID: "7e25f362-2a7f-48cb-b91f-18938713da5b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.791328 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-scripts" (OuterVolumeSpecName: "scripts") pod "7e25f362-2a7f-48cb-b91f-18938713da5b" (UID: "7e25f362-2a7f-48cb-b91f-18938713da5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.792473 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "7e25f362-2a7f-48cb-b91f-18938713da5b" (UID: "7e25f362-2a7f-48cb-b91f-18938713da5b"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 13:37:44 crc kubenswrapper[5065]: E1008 13:37:44.801514 5065 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod653cf006_7306_4e5a_b70c_8454b8c47b2d.slice/crio-conmon-4bbb945b81810d9acdd912221c5f3e90864bd3b29747eef97e3c2f2775c00372.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a257f6_4b74_429b_9da0_b76051265822.slice/crio-a89c20da0276a3947aed1ced43898875afc887bdfcfa04aee8f56a3d061dd158\": RecentStats: unable to find data in memory cache]" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.824796 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e25f362-2a7f-48cb-b91f-18938713da5b" (UID: "7e25f362-2a7f-48cb-b91f-18938713da5b"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.828034 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e25f362-2a7f-48cb-b91f-18938713da5b-kube-api-access-5l5kj" (OuterVolumeSpecName: "kube-api-access-5l5kj") pod "7e25f362-2a7f-48cb-b91f-18938713da5b" (UID: "7e25f362-2a7f-48cb-b91f-18938713da5b"). InnerVolumeSpecName "kube-api-access-5l5kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.847900 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7e25f362-2a7f-48cb-b91f-18938713da5b" (UID: "7e25f362-2a7f-48cb-b91f-18938713da5b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.862209 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-config-data" (OuterVolumeSpecName: "config-data") pod "7e25f362-2a7f-48cb-b91f-18938713da5b" (UID: "7e25f362-2a7f-48cb-b91f-18938713da5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.876986 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.877099 5065 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e25f362-2a7f-48cb-b91f-18938713da5b-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.877173 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l5kj\" (UniqueName: \"kubernetes.io/projected/7e25f362-2a7f-48cb-b91f-18938713da5b-kube-api-access-5l5kj\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.877235 5065 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.877304 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.877366 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e25f362-2a7f-48cb-b91f-18938713da5b-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.877465 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.877534 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e25f362-2a7f-48cb-b91f-18938713da5b-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:44 crc 
kubenswrapper[5065]: I1008 13:37:44.900613 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372e941a-3d7d-49ee-84c1-9d3d159d603f" path="/var/lib/kubelet/pods/372e941a-3d7d-49ee-84c1-9d3d159d603f/volumes" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.904282 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25d389f-4c59-4f3f-b110-291380171975" path="/var/lib/kubelet/pods/a25d389f-4c59-4f3f-b110-291380171975/volumes" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.929110 5065 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.933303 5065 generic.go:334] "Generic (PLEG): container finished" podID="b2d90dc7-e101-4bde-b8b3-e3c13e788004" containerID="6bbb9432076f2bffcec8c43a13c477be8b205975b484bb5a7624470f0199c513" exitCode=0 Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.933365 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xn6cj" event={"ID":"b2d90dc7-e101-4bde-b8b3-e3c13e788004","Type":"ContainerDied","Data":"6bbb9432076f2bffcec8c43a13c477be8b205975b484bb5a7624470f0199c513"} Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.948692 5065 generic.go:334] "Generic (PLEG): container finished" podID="7e25f362-2a7f-48cb-b91f-18938713da5b" containerID="a07a2095a71689cf1898efb6bd35c1151e37d73530fa2963acfbb21af6689e23" exitCode=0 Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.948854 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.948783 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e25f362-2a7f-48cb-b91f-18938713da5b","Type":"ContainerDied","Data":"a07a2095a71689cf1898efb6bd35c1151e37d73530fa2963acfbb21af6689e23"} Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.948972 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7e25f362-2a7f-48cb-b91f-18938713da5b","Type":"ContainerDied","Data":"2af75fa2a068a9d70c7f2353568ac194a6979ec123e3fb9de5ff48d904aafda2"} Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.948997 5065 scope.go:117] "RemoveContainer" containerID="a07a2095a71689cf1898efb6bd35c1151e37d73530fa2963acfbb21af6689e23" Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.960648 5065 generic.go:334] "Generic (PLEG): container finished" podID="5e89a553-dbd3-47d3-a187-c51aa149175c" containerID="bc51ebb968114d471bff7d14523f9bde9ceaa701634b0fc011f4d1ffe0c615db" exitCode=0 Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.960835 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-65r2q" event={"ID":"5e89a553-dbd3-47d3-a187-c51aa149175c","Type":"ContainerDied","Data":"bc51ebb968114d471bff7d14523f9bde9ceaa701634b0fc011f4d1ffe0c615db"} Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.960861 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-65r2q" event={"ID":"5e89a553-dbd3-47d3-a187-c51aa149175c","Type":"ContainerStarted","Data":"86d433a74dab3beb5e7f783226dbf70f61fb64d593bea9dc95a296c731566705"} Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.966083 5065 generic.go:334] "Generic (PLEG): container finished" 
podID="653cf006-7306-4e5a-b70c-8454b8c47b2d" containerID="4bbb945b81810d9acdd912221c5f3e90864bd3b29747eef97e3c2f2775c00372" exitCode=0 Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.966142 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gx42n" event={"ID":"653cf006-7306-4e5a-b70c-8454b8c47b2d","Type":"ContainerDied","Data":"4bbb945b81810d9acdd912221c5f3e90864bd3b29747eef97e3c2f2775c00372"} Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.966175 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gx42n" event={"ID":"653cf006-7306-4e5a-b70c-8454b8c47b2d","Type":"ContainerStarted","Data":"2aedfc06ceabf6f426372ab276b6df6de8a3f53993fd5250f3a84a0038cd41f1"} Oct 08 13:37:44 crc kubenswrapper[5065]: I1008 13:37:44.980871 5065 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.000408 5065 scope.go:117] "RemoveContainer" containerID="23444c74e68ca2731c3fdae6d6012d70c465c50587151ed8258d49e05535c7db" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.008369 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.025265 5065 scope.go:117] "RemoveContainer" containerID="a07a2095a71689cf1898efb6bd35c1151e37d73530fa2963acfbb21af6689e23" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.028307 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:37:45 crc kubenswrapper[5065]: E1008 13:37:45.029882 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07a2095a71689cf1898efb6bd35c1151e37d73530fa2963acfbb21af6689e23\": container with ID starting with a07a2095a71689cf1898efb6bd35c1151e37d73530fa2963acfbb21af6689e23 not found: ID does not exist" containerID="a07a2095a71689cf1898efb6bd35c1151e37d73530fa2963acfbb21af6689e23" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.030012 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07a2095a71689cf1898efb6bd35c1151e37d73530fa2963acfbb21af6689e23"} err="failed to get container status \"a07a2095a71689cf1898efb6bd35c1151e37d73530fa2963acfbb21af6689e23\": rpc error: code = NotFound desc = could not find container \"a07a2095a71689cf1898efb6bd35c1151e37d73530fa2963acfbb21af6689e23\": container with ID starting with a07a2095a71689cf1898efb6bd35c1151e37d73530fa2963acfbb21af6689e23 not found: ID does not exist" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.030096 5065 scope.go:117] "RemoveContainer" containerID="23444c74e68ca2731c3fdae6d6012d70c465c50587151ed8258d49e05535c7db" Oct 08 13:37:45 crc kubenswrapper[5065]: E1008 13:37:45.030400 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23444c74e68ca2731c3fdae6d6012d70c465c50587151ed8258d49e05535c7db\": container with ID starting with 23444c74e68ca2731c3fdae6d6012d70c465c50587151ed8258d49e05535c7db not found: ID does not exist" containerID="23444c74e68ca2731c3fdae6d6012d70c465c50587151ed8258d49e05535c7db" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.030444 5065 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"23444c74e68ca2731c3fdae6d6012d70c465c50587151ed8258d49e05535c7db"} err="failed to get container status \"23444c74e68ca2731c3fdae6d6012d70c465c50587151ed8258d49e05535c7db\": rpc error: code = NotFound desc = could not find container \"23444c74e68ca2731c3fdae6d6012d70c465c50587151ed8258d49e05535c7db\": container with ID starting with 23444c74e68ca2731c3fdae6d6012d70c465c50587151ed8258d49e05535c7db not found: ID does not exist" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.049926 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:37:45 crc kubenswrapper[5065]: E1008 13:37:45.050264 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e25f362-2a7f-48cb-b91f-18938713da5b" containerName="glance-log" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.050275 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e25f362-2a7f-48cb-b91f-18938713da5b" containerName="glance-log" Oct 08 13:37:45 crc kubenswrapper[5065]: E1008 13:37:45.050293 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e25f362-2a7f-48cb-b91f-18938713da5b" containerName="glance-httpd" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.050299 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e25f362-2a7f-48cb-b91f-18938713da5b" containerName="glance-httpd" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.050548 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e25f362-2a7f-48cb-b91f-18938713da5b" containerName="glance-log" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.050570 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e25f362-2a7f-48cb-b91f-18938713da5b" containerName="glance-httpd" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.051624 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.056625 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.059650 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.060149 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.186085 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.186166 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa6e8e72-d895-4018-a176-978d7975d8a6-logs\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.186196 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.186226 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnf6r\" (UniqueName: \"kubernetes.io/projected/fa6e8e72-d895-4018-a176-978d7975d8a6-kube-api-access-xnf6r\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.186266 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.186337 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa6e8e72-d895-4018-a176-978d7975d8a6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.186400 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.186513 5065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.191144 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.256314 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.268391 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.288093 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.288155 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.288195 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.288228 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa6e8e72-d895-4018-a176-978d7975d8a6-logs\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.288247 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.288267 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnf6r\" (UniqueName: \"kubernetes.io/projected/fa6e8e72-d895-4018-a176-978d7975d8a6-kube-api-access-xnf6r\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.288294 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.288328 5065 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa6e8e72-d895-4018-a176-978d7975d8a6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.288790 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa6e8e72-d895-4018-a176-978d7975d8a6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.289049 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa6e8e72-d895-4018-a176-978d7975d8a6-logs\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.290690 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.294451 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.294822 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.296271 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.308357 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnf6r\" (UniqueName: \"kubernetes.io/projected/fa6e8e72-d895-4018-a176-978d7975d8a6-kube-api-access-xnf6r\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.311557 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.321780 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.378618 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.942314 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:37:45 crc kubenswrapper[5065]: W1008 13:37:45.956845 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa6e8e72_d895_4018_a176_978d7975d8a6.slice/crio-484a5d6e6ba35b1c285960d8d99c16069f4b85f2614ea43f3930f6af39b3888e WatchSource:0}: Error finding container 484a5d6e6ba35b1c285960d8d99c16069f4b85f2614ea43f3930f6af39b3888e: Status 404 returned error can't find the container with id 484a5d6e6ba35b1c285960d8d99c16069f4b85f2614ea43f3930f6af39b3888e Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.991360 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1","Type":"ContainerStarted","Data":"037a6b37694963b7a5760a0ad035c6e193856505bf7691d3ab3c0b42717d8c62"} Oct 08 13:37:45 crc kubenswrapper[5065]: I1008 13:37:45.993220 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa6e8e72-d895-4018-a176-978d7975d8a6","Type":"ContainerStarted","Data":"484a5d6e6ba35b1c285960d8d99c16069f4b85f2614ea43f3930f6af39b3888e"} Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.005533 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8cea80f5-d915-459c-9882-4ce114929ab4","Type":"ContainerStarted","Data":"9695c1bae924698969321b11dcdf8c9c61d8845781c793deab0657229ef902be"} Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.506879 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gx42n" Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.510781 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-65r2q" Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.521087 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xn6cj" Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.619514 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x4j9\" (UniqueName: \"kubernetes.io/projected/5e89a553-dbd3-47d3-a187-c51aa149175c-kube-api-access-8x4j9\") pod \"5e89a553-dbd3-47d3-a187-c51aa149175c\" (UID: \"5e89a553-dbd3-47d3-a187-c51aa149175c\") " Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.619802 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8b8f\" (UniqueName: \"kubernetes.io/projected/653cf006-7306-4e5a-b70c-8454b8c47b2d-kube-api-access-r8b8f\") pod \"653cf006-7306-4e5a-b70c-8454b8c47b2d\" (UID: \"653cf006-7306-4e5a-b70c-8454b8c47b2d\") " Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.626245 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e89a553-dbd3-47d3-a187-c51aa149175c-kube-api-access-8x4j9" (OuterVolumeSpecName: "kube-api-access-8x4j9") pod "5e89a553-dbd3-47d3-a187-c51aa149175c" (UID: "5e89a553-dbd3-47d3-a187-c51aa149175c"). InnerVolumeSpecName "kube-api-access-8x4j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.639853 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/653cf006-7306-4e5a-b70c-8454b8c47b2d-kube-api-access-r8b8f" (OuterVolumeSpecName: "kube-api-access-r8b8f") pod "653cf006-7306-4e5a-b70c-8454b8c47b2d" (UID: "653cf006-7306-4e5a-b70c-8454b8c47b2d"). InnerVolumeSpecName "kube-api-access-r8b8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.745853 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8rwm\" (UniqueName: \"kubernetes.io/projected/b2d90dc7-e101-4bde-b8b3-e3c13e788004-kube-api-access-k8rwm\") pod \"b2d90dc7-e101-4bde-b8b3-e3c13e788004\" (UID: \"b2d90dc7-e101-4bde-b8b3-e3c13e788004\") " Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.746213 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8b8f\" (UniqueName: \"kubernetes.io/projected/653cf006-7306-4e5a-b70c-8454b8c47b2d-kube-api-access-r8b8f\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.746236 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x4j9\" (UniqueName: \"kubernetes.io/projected/5e89a553-dbd3-47d3-a187-c51aa149175c-kube-api-access-8x4j9\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.752639 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d90dc7-e101-4bde-b8b3-e3c13e788004-kube-api-access-k8rwm" (OuterVolumeSpecName: "kube-api-access-k8rwm") pod "b2d90dc7-e101-4bde-b8b3-e3c13e788004" (UID: "b2d90dc7-e101-4bde-b8b3-e3c13e788004"). InnerVolumeSpecName "kube-api-access-k8rwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.848545 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8rwm\" (UniqueName: \"kubernetes.io/projected/b2d90dc7-e101-4bde-b8b3-e3c13e788004-kube-api-access-k8rwm\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:46 crc kubenswrapper[5065]: I1008 13:37:46.914323 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e25f362-2a7f-48cb-b91f-18938713da5b" path="/var/lib/kubelet/pods/7e25f362-2a7f-48cb-b91f-18938713da5b/volumes" Oct 08 13:37:47 crc kubenswrapper[5065]: I1008 13:37:47.022551 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gx42n" event={"ID":"653cf006-7306-4e5a-b70c-8454b8c47b2d","Type":"ContainerDied","Data":"2aedfc06ceabf6f426372ab276b6df6de8a3f53993fd5250f3a84a0038cd41f1"} Oct 08 13:37:47 crc kubenswrapper[5065]: I1008 13:37:47.022823 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aedfc06ceabf6f426372ab276b6df6de8a3f53993fd5250f3a84a0038cd41f1" Oct 08 13:37:47 crc kubenswrapper[5065]: I1008 13:37:47.022675 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gx42n" Oct 08 13:37:47 crc kubenswrapper[5065]: I1008 13:37:47.027095 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8cea80f5-d915-459c-9882-4ce114929ab4","Type":"ContainerStarted","Data":"f0213aea8bcbd4774261ea9ed66c3a96af420735c302c67b2721cd71e709425b"} Oct 08 13:37:47 crc kubenswrapper[5065]: I1008 13:37:47.032303 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1","Type":"ContainerStarted","Data":"ba3bdd3d5cf534782fdff003ee27a2389100b1d049abcf67d4714c43d809bfdb"} Oct 08 13:37:47 crc kubenswrapper[5065]: I1008 13:37:47.032370 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1","Type":"ContainerStarted","Data":"b90f7cd2d54c2136944572dd894c19926d029a44064aecb999a6be5cc08c0876"} Oct 08 13:37:47 crc kubenswrapper[5065]: I1008 13:37:47.041005 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xn6cj" event={"ID":"b2d90dc7-e101-4bde-b8b3-e3c13e788004","Type":"ContainerDied","Data":"a47c07788842f718664fef62e8b92ac748cd1a6701c4cf9792105560351ded44"} Oct 08 13:37:47 crc kubenswrapper[5065]: I1008 13:37:47.041044 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a47c07788842f718664fef62e8b92ac748cd1a6701c4cf9792105560351ded44" Oct 08 13:37:47 crc kubenswrapper[5065]: I1008 13:37:47.041108 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xn6cj" Oct 08 13:37:47 crc kubenswrapper[5065]: I1008 13:37:47.048161 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-65r2q" event={"ID":"5e89a553-dbd3-47d3-a187-c51aa149175c","Type":"ContainerDied","Data":"86d433a74dab3beb5e7f783226dbf70f61fb64d593bea9dc95a296c731566705"} Oct 08 13:37:47 crc kubenswrapper[5065]: I1008 13:37:47.048189 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d433a74dab3beb5e7f783226dbf70f61fb64d593bea9dc95a296c731566705" Oct 08 13:37:47 crc kubenswrapper[5065]: I1008 13:37:47.048233 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-65r2q" Oct 08 13:37:48 crc kubenswrapper[5065]: I1008 13:37:48.059533 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8cea80f5-d915-459c-9882-4ce114929ab4","Type":"ContainerStarted","Data":"ed14fb4d41bcd1662b46fbaba5448c7edb1bddf5c128eb75189e64b2a3222c21"} Oct 08 13:37:48 crc kubenswrapper[5065]: I1008 13:37:48.062923 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1","Type":"ContainerStarted","Data":"678c8eb1820f2d56e60243bea99de8ad2267d29eed0c6e57387f95fd6fdeb528"} Oct 08 13:37:48 crc kubenswrapper[5065]: I1008 13:37:48.065137 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa6e8e72-d895-4018-a176-978d7975d8a6","Type":"ContainerStarted","Data":"7874ec49d85eba9cfb91c420e3419d8a588bba8034f460f6177db2652682d633"} Oct 08 13:37:48 crc kubenswrapper[5065]: I1008 13:37:48.065162 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa6e8e72-d895-4018-a176-978d7975d8a6","Type":"ContainerStarted","Data":"7075a38356042d5e366d951a3d1e78c1621a5061aada6f743076696bfacd2c63"} Oct 08 13:37:48 crc kubenswrapper[5065]: I1008 13:37:48.090927 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.090904259 podStartE2EDuration="4.090904259s" podCreationTimestamp="2025-10-08 13:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:48.081386613 +0000 UTC m=+1169.858768370" watchObservedRunningTime="2025-10-08 13:37:48.090904259 +0000 UTC m=+1169.868286016" Oct 08 13:37:48 crc kubenswrapper[5065]: I1008 13:37:48.109211 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.10919218 podStartE2EDuration="4.10919218s" podCreationTimestamp="2025-10-08 13:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:37:48.108391817 +0000 UTC m=+1169.885773604" watchObservedRunningTime="2025-10-08 13:37:48.10919218 +0000 UTC m=+1169.886573937" Oct 08 13:37:49 crc kubenswrapper[5065]: I1008 13:37:49.400890 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:49 crc kubenswrapper[5065]: I1008 13:37:49.401322 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:37:52 crc 
kubenswrapper[5065]: I1008 13:37:52.697564 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6610-account-create-zpq4j"] Oct 08 13:37:52 crc kubenswrapper[5065]: E1008 13:37:52.698208 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d90dc7-e101-4bde-b8b3-e3c13e788004" containerName="mariadb-database-create" Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.698221 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d90dc7-e101-4bde-b8b3-e3c13e788004" containerName="mariadb-database-create" Oct 08 13:37:52 crc kubenswrapper[5065]: E1008 13:37:52.698242 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e89a553-dbd3-47d3-a187-c51aa149175c" containerName="mariadb-database-create" Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.698249 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e89a553-dbd3-47d3-a187-c51aa149175c" containerName="mariadb-database-create" Oct 08 13:37:52 crc kubenswrapper[5065]: E1008 13:37:52.698260 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653cf006-7306-4e5a-b70c-8454b8c47b2d" containerName="mariadb-database-create" Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.698268 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="653cf006-7306-4e5a-b70c-8454b8c47b2d" containerName="mariadb-database-create" Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.698459 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d90dc7-e101-4bde-b8b3-e3c13e788004" containerName="mariadb-database-create" Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.698476 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e89a553-dbd3-47d3-a187-c51aa149175c" containerName="mariadb-database-create" Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.698487 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="653cf006-7306-4e5a-b70c-8454b8c47b2d" containerName="mariadb-database-create" Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.699001 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6610-account-create-zpq4j" Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.701055 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.713523 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6610-account-create-zpq4j"] Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.897784 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drw4t\" (UniqueName: \"kubernetes.io/projected/57d1ad99-ea8d-4b51-bdf5-dfff27b0407d-kube-api-access-drw4t\") pod \"nova-api-6610-account-create-zpq4j\" (UID: \"57d1ad99-ea8d-4b51-bdf5-dfff27b0407d\") " pod="openstack/nova-api-6610-account-create-zpq4j" Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.906116 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-be58-account-create-cmqwf"] Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.908344 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-be58-account-create-cmqwf"] Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.913212 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-be58-account-create-cmqwf" Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.921446 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 08 13:37:52 crc kubenswrapper[5065]: I1008 13:37:52.999710 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drw4t\" (UniqueName: \"kubernetes.io/projected/57d1ad99-ea8d-4b51-bdf5-dfff27b0407d-kube-api-access-drw4t\") pod \"nova-api-6610-account-create-zpq4j\" (UID: \"57d1ad99-ea8d-4b51-bdf5-dfff27b0407d\") " pod="openstack/nova-api-6610-account-create-zpq4j" Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.031317 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drw4t\" (UniqueName: \"kubernetes.io/projected/57d1ad99-ea8d-4b51-bdf5-dfff27b0407d-kube-api-access-drw4t\") pod \"nova-api-6610-account-create-zpq4j\" (UID: \"57d1ad99-ea8d-4b51-bdf5-dfff27b0407d\") " pod="openstack/nova-api-6610-account-create-zpq4j" Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.031946 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6610-account-create-zpq4j" Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.102471 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gzcr\" (UniqueName: \"kubernetes.io/projected/c63585b8-a023-4567-85f8-6c232acb89c2-kube-api-access-7gzcr\") pod \"nova-cell0-be58-account-create-cmqwf\" (UID: \"c63585b8-a023-4567-85f8-6c232acb89c2\") " pod="openstack/nova-cell0-be58-account-create-cmqwf" Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.114054 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3b0c-account-create-k6csx"] Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.115263 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3b0c-account-create-k6csx" Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.122805 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.137715 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3b0c-account-create-k6csx"] Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.204721 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n862k\" (UniqueName: \"kubernetes.io/projected/63f77331-9d91-4f35-b1b9-a9c3c68162ca-kube-api-access-n862k\") pod \"nova-cell1-3b0c-account-create-k6csx\" (UID: \"63f77331-9d91-4f35-b1b9-a9c3c68162ca\") " pod="openstack/nova-cell1-3b0c-account-create-k6csx" Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.204849 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gzcr\" (UniqueName: \"kubernetes.io/projected/c63585b8-a023-4567-85f8-6c232acb89c2-kube-api-access-7gzcr\") pod \"nova-cell0-be58-account-create-cmqwf\" (UID: \"c63585b8-a023-4567-85f8-6c232acb89c2\") " pod="openstack/nova-cell0-be58-account-create-cmqwf" Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.222131 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gzcr\" (UniqueName: \"kubernetes.io/projected/c63585b8-a023-4567-85f8-6c232acb89c2-kube-api-access-7gzcr\") pod \"nova-cell0-be58-account-create-cmqwf\" (UID: \"c63585b8-a023-4567-85f8-6c232acb89c2\") " pod="openstack/nova-cell0-be58-account-create-cmqwf" Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.243605 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-be58-account-create-cmqwf" Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.305952 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n862k\" (UniqueName: \"kubernetes.io/projected/63f77331-9d91-4f35-b1b9-a9c3c68162ca-kube-api-access-n862k\") pod \"nova-cell1-3b0c-account-create-k6csx\" (UID: \"63f77331-9d91-4f35-b1b9-a9c3c68162ca\") " pod="openstack/nova-cell1-3b0c-account-create-k6csx" Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.325617 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n862k\" (UniqueName: \"kubernetes.io/projected/63f77331-9d91-4f35-b1b9-a9c3c68162ca-kube-api-access-n862k\") pod \"nova-cell1-3b0c-account-create-k6csx\" (UID: \"63f77331-9d91-4f35-b1b9-a9c3c68162ca\") " pod="openstack/nova-cell1-3b0c-account-create-k6csx" Oct 08 13:37:53 crc kubenswrapper[5065]: I1008 13:37:53.470111 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3b0c-account-create-k6csx" Oct 08 13:37:54 crc kubenswrapper[5065]: I1008 13:37:54.378255 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 08 13:37:54 crc kubenswrapper[5065]: I1008 13:37:54.378919 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 08 13:37:54 crc kubenswrapper[5065]: I1008 13:37:54.410266 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 08 13:37:54 crc kubenswrapper[5065]: I1008 13:37:54.426073 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 08 13:37:55 crc kubenswrapper[5065]: E1008 13:37:55.051130 5065 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a257f6_4b74_429b_9da0_b76051265822.slice/crio-a89c20da0276a3947aed1ced43898875afc887bdfcfa04aee8f56a3d061dd158\": RecentStats: unable to find data in memory cache]" Oct 08 13:37:55 crc kubenswrapper[5065]: I1008 13:37:55.166367 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 08 13:37:55 crc kubenswrapper[5065]: I1008 13:37:55.166756 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 08 13:37:55 crc kubenswrapper[5065]: I1008 13:37:55.383154 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:55 crc kubenswrapper[5065]: I1008 13:37:55.383432 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:55 crc kubenswrapper[5065]: I1008 13:37:55.431997 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:55 crc kubenswrapper[5065]: I1008 13:37:55.442518 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:55 crc kubenswrapper[5065]: I1008 13:37:55.664163 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6610-account-create-zpq4j"] Oct 08 13:37:55 crc kubenswrapper[5065]: I1008 13:37:55.758701 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3b0c-account-create-k6csx"] Oct 08 13:37:55 crc kubenswrapper[5065]: I1008 13:37:55.769538 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-be58-account-create-cmqwf"] Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.176295 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1","Type":"ContainerStarted","Data":"9a1769ff289de605188a4eb6fffd06e2f5564a2a75d3ed24faf93b5a758dc47e"} Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.176382 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.178509 5065 generic.go:334] "Generic (PLEG): container finished" podID="63f77331-9d91-4f35-b1b9-a9c3c68162ca" containerID="ca9d0476322fc49313322ef95c7d2220e14767e0f7cd2dbdb105164851cc3638" exitCode=0 Oct 08 
13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.178659 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3b0c-account-create-k6csx" event={"ID":"63f77331-9d91-4f35-b1b9-a9c3c68162ca","Type":"ContainerDied","Data":"ca9d0476322fc49313322ef95c7d2220e14767e0f7cd2dbdb105164851cc3638"} Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.178687 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3b0c-account-create-k6csx" event={"ID":"63f77331-9d91-4f35-b1b9-a9c3c68162ca","Type":"ContainerStarted","Data":"4943b0080c97edfef14bb812a6b966f825dc7634b6e3ec27c94ce6c940acc6e2"} Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.180586 5065 generic.go:334] "Generic (PLEG): container finished" podID="c63585b8-a023-4567-85f8-6c232acb89c2" containerID="705aeed5a07d4673f5ecaaa303c27972ce239afc15c12136ef9136b3fdea069f" exitCode=0 Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.180712 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be58-account-create-cmqwf" event={"ID":"c63585b8-a023-4567-85f8-6c232acb89c2","Type":"ContainerDied","Data":"705aeed5a07d4673f5ecaaa303c27972ce239afc15c12136ef9136b3fdea069f"} Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.180742 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be58-account-create-cmqwf" event={"ID":"c63585b8-a023-4567-85f8-6c232acb89c2","Type":"ContainerStarted","Data":"f00f5c260995db7ed2281978b83d925899d0ca83aeded8835b89f119eac4c6bc"} Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.182187 5065 generic.go:334] "Generic (PLEG): container finished" podID="57d1ad99-ea8d-4b51-bdf5-dfff27b0407d" containerID="25080b897bc4cb453bc113e3181f0a3b3ad3ee88b0edd40cc6b6bbfbfa6f2825" exitCode=0 Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.182252 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6610-account-create-zpq4j" event={"ID":"57d1ad99-ea8d-4b51-bdf5-dfff27b0407d","Type":"ContainerDied","Data":"25080b897bc4cb453bc113e3181f0a3b3ad3ee88b0edd40cc6b6bbfbfa6f2825"} Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.182285 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6610-account-create-zpq4j" event={"ID":"57d1ad99-ea8d-4b51-bdf5-dfff27b0407d","Type":"ContainerStarted","Data":"eee773c779a896f88dd89a36e270471e70ca25ffa520b444db5844c63bdce260"} Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.183568 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2136579b-6d89-48ff-b960-a0401eb9af4c","Type":"ContainerStarted","Data":"9464772db83ebfdeae74f66036b1b7d4c94daed370654f9e39056ee48ce23e40"} Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.184603 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.184636 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.211144 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.230747052 podStartE2EDuration="12.21111791s" podCreationTimestamp="2025-10-08 13:37:44 +0000 UTC" firstStartedPulling="2025-10-08 13:37:45.250087913 +0000 UTC m=+1167.027469670" lastFinishedPulling="2025-10-08 13:37:55.230458781 +0000 UTC m=+1177.007840528" 
observedRunningTime="2025-10-08 13:37:56.200400881 +0000 UTC m=+1177.977782638" watchObservedRunningTime="2025-10-08 13:37:56.21111791 +0000 UTC m=+1177.988499667" Oct 08 13:37:56 crc kubenswrapper[5065]: I1008 13:37:56.260734 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.033919568 podStartE2EDuration="23.260712745s" podCreationTimestamp="2025-10-08 13:37:33 +0000 UTC" firstStartedPulling="2025-10-08 13:37:34.00533975 +0000 UTC m=+1155.782721507" lastFinishedPulling="2025-10-08 13:37:55.232132927 +0000 UTC m=+1177.009514684" observedRunningTime="2025-10-08 13:37:56.254697047 +0000 UTC m=+1178.032078804" watchObservedRunningTime="2025-10-08 13:37:56.260712745 +0000 UTC m=+1178.038094502" Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.187432 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.376860 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.377138 5065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.594086 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3b0c-account-create-k6csx" Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.693328 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n862k\" (UniqueName: \"kubernetes.io/projected/63f77331-9d91-4f35-b1b9-a9c3c68162ca-kube-api-access-n862k\") pod \"63f77331-9d91-4f35-b1b9-a9c3c68162ca\" (UID: \"63f77331-9d91-4f35-b1b9-a9c3c68162ca\") " Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.698777 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f77331-9d91-4f35-b1b9-a9c3c68162ca-kube-api-access-n862k" (OuterVolumeSpecName: "kube-api-access-n862k") pod "63f77331-9d91-4f35-b1b9-a9c3c68162ca" (UID: "63f77331-9d91-4f35-b1b9-a9c3c68162ca"). InnerVolumeSpecName "kube-api-access-n862k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.792856 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6610-account-create-zpq4j" Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.797756 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n862k\" (UniqueName: \"kubernetes.io/projected/63f77331-9d91-4f35-b1b9-a9c3c68162ca-kube-api-access-n862k\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.801680 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-be58-account-create-cmqwf" Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.851093 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.898680 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gzcr\" (UniqueName: \"kubernetes.io/projected/c63585b8-a023-4567-85f8-6c232acb89c2-kube-api-access-7gzcr\") pod \"c63585b8-a023-4567-85f8-6c232acb89c2\" (UID: \"c63585b8-a023-4567-85f8-6c232acb89c2\") " Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.899059 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drw4t\" (UniqueName: \"kubernetes.io/projected/57d1ad99-ea8d-4b51-bdf5-dfff27b0407d-kube-api-access-drw4t\") pod \"57d1ad99-ea8d-4b51-bdf5-dfff27b0407d\" (UID: \"57d1ad99-ea8d-4b51-bdf5-dfff27b0407d\") " Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.912085 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63585b8-a023-4567-85f8-6c232acb89c2-kube-api-access-7gzcr" (OuterVolumeSpecName: "kube-api-access-7gzcr") pod "c63585b8-a023-4567-85f8-6c232acb89c2" (UID: "c63585b8-a023-4567-85f8-6c232acb89c2"). InnerVolumeSpecName "kube-api-access-7gzcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:57 crc kubenswrapper[5065]: I1008 13:37:57.912131 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57d1ad99-ea8d-4b51-bdf5-dfff27b0407d-kube-api-access-drw4t" (OuterVolumeSpecName: "kube-api-access-drw4t") pod "57d1ad99-ea8d-4b51-bdf5-dfff27b0407d" (UID: "57d1ad99-ea8d-4b51-bdf5-dfff27b0407d"). InnerVolumeSpecName "kube-api-access-drw4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.001794 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drw4t\" (UniqueName: \"kubernetes.io/projected/57d1ad99-ea8d-4b51-bdf5-dfff27b0407d-kube-api-access-drw4t\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.001824 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gzcr\" (UniqueName: \"kubernetes.io/projected/c63585b8-a023-4567-85f8-6c232acb89c2-kube-api-access-7gzcr\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.200740 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3b0c-account-create-k6csx" Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.200757 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3b0c-account-create-k6csx" event={"ID":"63f77331-9d91-4f35-b1b9-a9c3c68162ca","Type":"ContainerDied","Data":"4943b0080c97edfef14bb812a6b966f825dc7634b6e3ec27c94ce6c940acc6e2"} Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.201934 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4943b0080c97edfef14bb812a6b966f825dc7634b6e3ec27c94ce6c940acc6e2" Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.202728 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be58-account-create-cmqwf" event={"ID":"c63585b8-a023-4567-85f8-6c232acb89c2","Type":"ContainerDied","Data":"f00f5c260995db7ed2281978b83d925899d0ca83aeded8835b89f119eac4c6bc"} Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.202755 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f00f5c260995db7ed2281978b83d925899d0ca83aeded8835b89f119eac4c6bc" Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.202814 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-be58-account-create-cmqwf" Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.205463 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6610-account-create-zpq4j" event={"ID":"57d1ad99-ea8d-4b51-bdf5-dfff27b0407d","Type":"ContainerDied","Data":"eee773c779a896f88dd89a36e270471e70ca25ffa520b444db5844c63bdce260"} Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.205571 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6610-account-create-zpq4j" Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.205501 5065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.205673 5065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.205591 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee773c779a896f88dd89a36e270471e70ca25ffa520b444db5844c63bdce260" Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.205879 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="ceilometer-central-agent" containerID="cri-o://b90f7cd2d54c2136944572dd894c19926d029a44064aecb999a6be5cc08c0876" gracePeriod=30 Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.206112 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="sg-core" containerID="cri-o://678c8eb1820f2d56e60243bea99de8ad2267d29eed0c6e57387f95fd6fdeb528" gracePeriod=30 Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.206127 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="proxy-httpd" containerID="cri-o://9a1769ff289de605188a4eb6fffd06e2f5564a2a75d3ed24faf93b5a758dc47e" gracePeriod=30 Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.206137 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="ceilometer-notification-agent" containerID="cri-o://ba3bdd3d5cf534782fdff003ee27a2389100b1d049abcf67d4714c43d809bfdb" gracePeriod=30 Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.483344 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:58 crc kubenswrapper[5065]: I1008 13:37:58.496382 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.219188 5065 generic.go:334] "Generic (PLEG): container finished" podID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerID="9a1769ff289de605188a4eb6fffd06e2f5564a2a75d3ed24faf93b5a758dc47e" exitCode=0 Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.219523 5065 generic.go:334] "Generic (PLEG): container finished" podID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerID="678c8eb1820f2d56e60243bea99de8ad2267d29eed0c6e57387f95fd6fdeb528" exitCode=2 Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.219539 5065 generic.go:334] "Generic (PLEG): container finished" podID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerID="ba3bdd3d5cf534782fdff003ee27a2389100b1d049abcf67d4714c43d809bfdb" exitCode=0 Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.219549 5065 generic.go:334] "Generic (PLEG): container finished" podID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerID="b90f7cd2d54c2136944572dd894c19926d029a44064aecb999a6be5cc08c0876" exitCode=0 Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.219822 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1","Type":"ContainerDied","Data":"9a1769ff289de605188a4eb6fffd06e2f5564a2a75d3ed24faf93b5a758dc47e"} Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.219862 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1","Type":"ContainerDied","Data":"678c8eb1820f2d56e60243bea99de8ad2267d29eed0c6e57387f95fd6fdeb528"} Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.219878 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1","Type":"ContainerDied","Data":"ba3bdd3d5cf534782fdff003ee27a2389100b1d049abcf67d4714c43d809bfdb"} Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.219890 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1","Type":"ContainerDied","Data":"b90f7cd2d54c2136944572dd894c19926d029a44064aecb999a6be5cc08c0876"} Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.552705 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.633007 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-combined-ca-bundle\") pod \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.633082 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzl9j\" (UniqueName: \"kubernetes.io/projected/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-kube-api-access-kzl9j\") pod \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.633172 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-config-data\") pod \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.633189 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-run-httpd\") pod \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.633251 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-sg-core-conf-yaml\") pod \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.633314 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-scripts\") pod \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.633335 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-log-httpd\") pod \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\" (UID: \"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1\") " Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.634184 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" (UID: "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.635905 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" (UID: "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.640626 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-scripts" (OuterVolumeSpecName: "scripts") pod "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" (UID: "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.649048 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-kube-api-access-kzl9j" (OuterVolumeSpecName: "kube-api-access-kzl9j") pod "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" (UID: "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1"). InnerVolumeSpecName "kube-api-access-kzl9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.679624 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" (UID: "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.727302 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" (UID: "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.735425 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.735449 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzl9j\" (UniqueName: \"kubernetes.io/projected/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-kube-api-access-kzl9j\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.735460 5065 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.735468 5065 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.735478 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.735488 5065 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.763859 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-config-data" (OuterVolumeSpecName: "config-data") pod "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" (UID: "4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:37:59 crc kubenswrapper[5065]: I1008 13:37:59.837061 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.232190 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1","Type":"ContainerDied","Data":"037a6b37694963b7a5760a0ad035c6e193856505bf7691d3ab3c0b42717d8c62"} Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.232228 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.232264 5065 scope.go:117] "RemoveContainer" containerID="9a1769ff289de605188a4eb6fffd06e2f5564a2a75d3ed24faf93b5a758dc47e" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.254880 5065 scope.go:117] "RemoveContainer" containerID="678c8eb1820f2d56e60243bea99de8ad2267d29eed0c6e57387f95fd6fdeb528" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.271865 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.280597 5065 scope.go:117] "RemoveContainer" containerID="ba3bdd3d5cf534782fdff003ee27a2389100b1d049abcf67d4714c43d809bfdb" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.287511 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.295478 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:38:00 crc kubenswrapper[5065]: E1008 13:38:00.295848 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="proxy-httpd" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.295869 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="proxy-httpd" Oct 08 13:38:00 crc kubenswrapper[5065]: E1008 13:38:00.295883 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d1ad99-ea8d-4b51-bdf5-dfff27b0407d" containerName="mariadb-account-create" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.295891 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d1ad99-ea8d-4b51-bdf5-dfff27b0407d" containerName="mariadb-account-create" Oct 08 13:38:00 crc kubenswrapper[5065]: E1008 13:38:00.295906 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="sg-core" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.295913 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="sg-core" Oct 08 13:38:00 crc kubenswrapper[5065]: E1008 13:38:00.295928 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63585b8-a023-4567-85f8-6c232acb89c2" containerName="mariadb-account-create" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.295934 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63585b8-a023-4567-85f8-6c232acb89c2" containerName="mariadb-account-create" Oct 08 13:38:00 crc kubenswrapper[5065]: E1008 13:38:00.295950 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f77331-9d91-4f35-b1b9-a9c3c68162ca" containerName="mariadb-account-create" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.295956 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f77331-9d91-4f35-b1b9-a9c3c68162ca" containerName="mariadb-account-create" Oct 08 13:38:00 crc kubenswrapper[5065]: E1008 13:38:00.295973 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="ceilometer-central-agent" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.295978 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="ceilometer-central-agent" Oct 08 13:38:00 crc kubenswrapper[5065]: E1008 13:38:00.295991 5065 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="ceilometer-notification-agent" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.295996 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="ceilometer-notification-agent" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.296146 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63585b8-a023-4567-85f8-6c232acb89c2" containerName="mariadb-account-create" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.296158 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="sg-core" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.296166 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f77331-9d91-4f35-b1b9-a9c3c68162ca" containerName="mariadb-account-create" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.296177 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="ceilometer-notification-agent" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.296186 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="57d1ad99-ea8d-4b51-bdf5-dfff27b0407d" containerName="mariadb-account-create" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.296200 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="ceilometer-central-agent" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.296215 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" containerName="proxy-httpd" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.297785 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.300249 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.300782 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.321582 5065 scope.go:117] "RemoveContainer" containerID="b90f7cd2d54c2136944572dd894c19926d029a44064aecb999a6be5cc08c0876" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.326931 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.449248 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.449917 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-config-data\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.450001 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-scripts\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.450088 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-log-httpd\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.450306 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.450489 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-run-httpd\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.450562 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc2sq\" (UniqueName: \"kubernetes.io/projected/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-kube-api-access-cc2sq\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.552987 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-log-httpd\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.553062 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.553116 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-run-httpd\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.553149 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc2sq\" (UniqueName: \"kubernetes.io/projected/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-kube-api-access-cc2sq\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.553297 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.553328 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-config-data\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.553369 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-scripts\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.553440 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-log-httpd\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.553585 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-run-httpd\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.561714 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-config-data\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.562304 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.563076 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-scripts\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.572377 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.575055 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc2sq\" (UniqueName: \"kubernetes.io/projected/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-kube-api-access-cc2sq\") pod \"ceilometer-0\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.632966 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:38:00 crc kubenswrapper[5065]: I1008 13:38:00.886813 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1" path="/var/lib/kubelet/pods/4c2f7f80-5d4e-47f8-a0d5-ab8bdfe6eff1/volumes" Oct 08 13:38:01 crc kubenswrapper[5065]: I1008 13:38:01.097844 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:38:01 crc kubenswrapper[5065]: I1008 13:38:01.243549 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656","Type":"ContainerStarted","Data":"ddb9452c56de578cde4f2e3698fac15e929d66cb8a782e3f7024b9d1a2a6141a"} Oct 08 13:38:01 crc kubenswrapper[5065]: I1008 13:38:01.367141 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.142622 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ns8wd"] Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.144577 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.148671 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.149164 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rrzqm" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.149180 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.158157 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ns8wd"] Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.261867 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656","Type":"ContainerStarted","Data":"dd7d1d756255801153ceda68a58971ef76b30fb2037ff80e5e87bbd0f7e89568"} Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.311378 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-scripts\") pod \"nova-cell0-conductor-db-sync-ns8wd\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") " pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.311503 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-config-data\") pod \"nova-cell0-conductor-db-sync-ns8wd\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") " pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.311573 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrk22\" (UniqueName: \"kubernetes.io/projected/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-kube-api-access-qrk22\") pod \"nova-cell0-conductor-db-sync-ns8wd\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") " pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.311607 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ns8wd\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") " pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.413493 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-config-data\") pod \"nova-cell0-conductor-db-sync-ns8wd\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") " pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.413575 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrk22\" (UniqueName: \"kubernetes.io/projected/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-kube-api-access-qrk22\") pod \"nova-cell0-conductor-db-sync-ns8wd\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") " pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 
13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.413602 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ns8wd\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") " pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.413685 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-scripts\") pod \"nova-cell0-conductor-db-sync-ns8wd\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") " pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.420157 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ns8wd\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") " pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.422934 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-config-data\") pod \"nova-cell0-conductor-db-sync-ns8wd\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") " pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.431796 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-scripts\") pod \"nova-cell0-conductor-db-sync-ns8wd\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") " pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.432090 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrk22\" (UniqueName: \"kubernetes.io/projected/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-kube-api-access-qrk22\") pod \"nova-cell0-conductor-db-sync-ns8wd\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") " pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 13:38:03 crc kubenswrapper[5065]: I1008 13:38:03.463944 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ns8wd" Oct 08 13:38:04 crc kubenswrapper[5065]: I1008 13:38:04.053244 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ns8wd"] Oct 08 13:38:04 crc kubenswrapper[5065]: W1008 13:38:04.060393 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ca7fdfe_89ed_41bb_a9cb_919a501afeb3.slice/crio-82c7d34db9c9e10929e72c40c11a651870378cb67d25efbce308fb76105f83a8 WatchSource:0}: Error finding container 82c7d34db9c9e10929e72c40c11a651870378cb67d25efbce308fb76105f83a8: Status 404 returned error can't find the container with id 82c7d34db9c9e10929e72c40c11a651870378cb67d25efbce308fb76105f83a8 Oct 08 13:38:04 crc kubenswrapper[5065]: I1008 13:38:04.272122 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ns8wd" event={"ID":"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3","Type":"ContainerStarted","Data":"82c7d34db9c9e10929e72c40c11a651870378cb67d25efbce308fb76105f83a8"} Oct 08 13:38:05 crc kubenswrapper[5065]: I1008 13:38:05.283878 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656","Type":"ContainerStarted","Data":"98ddb317a94821c5f6730cda7692ac667d579543f81c6a04c1be74cd649f8c48"} Oct 08 13:38:05 crc kubenswrapper[5065]: I1008 13:38:05.284173 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656","Type":"ContainerStarted","Data":"9ccfe4033d06c89a67e2b49e2fdf1b36dc9d692ea6c87f755229d722db5b2b2a"} Oct 08 13:38:05 crc kubenswrapper[5065]: E1008 13:38:05.391760 5065 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a257f6_4b74_429b_9da0_b76051265822.slice/crio-a89c20da0276a3947aed1ced43898875afc887bdfcfa04aee8f56a3d061dd158\": RecentStats: unable to find data in memory cache]" Oct 08 13:38:07 crc kubenswrapper[5065]: I1008 13:38:07.315252 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656","Type":"ContainerStarted","Data":"38c018acce185ee49e6beb235d03caf2a79a5e550ee5bd25c921c3043d7696bf"} Oct 08 13:38:07 crc kubenswrapper[5065]: I1008 13:38:07.315936 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="ceilometer-central-agent" containerID="cri-o://dd7d1d756255801153ceda68a58971ef76b30fb2037ff80e5e87bbd0f7e89568" gracePeriod=30 Oct 08 13:38:07 crc kubenswrapper[5065]: I1008 13:38:07.316015 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 13:38:07 crc kubenswrapper[5065]: I1008 13:38:07.316361 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="proxy-httpd" containerID="cri-o://38c018acce185ee49e6beb235d03caf2a79a5e550ee5bd25c921c3043d7696bf" gracePeriod=30 Oct 08 13:38:07 crc kubenswrapper[5065]: I1008 13:38:07.316405 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="sg-core" 
containerID="cri-o://98ddb317a94821c5f6730cda7692ac667d579543f81c6a04c1be74cd649f8c48" gracePeriod=30 Oct 08 13:38:07 crc kubenswrapper[5065]: I1008 13:38:07.316455 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="ceilometer-notification-agent" containerID="cri-o://9ccfe4033d06c89a67e2b49e2fdf1b36dc9d692ea6c87f755229d722db5b2b2a" gracePeriod=30 Oct 08 13:38:07 crc kubenswrapper[5065]: I1008 13:38:07.337177 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9322239030000001 podStartE2EDuration="7.337141516s" podCreationTimestamp="2025-10-08 13:38:00 +0000 UTC" firstStartedPulling="2025-10-08 13:38:01.099346411 +0000 UTC m=+1182.876728168" lastFinishedPulling="2025-10-08 13:38:06.504264024 +0000 UTC m=+1188.281645781" observedRunningTime="2025-10-08 13:38:07.334752449 +0000 UTC m=+1189.112134216" watchObservedRunningTime="2025-10-08 13:38:07.337141516 +0000 UTC m=+1189.114523273" Oct 08 13:38:08 crc kubenswrapper[5065]: I1008 13:38:08.327580 5065 generic.go:334] "Generic (PLEG): container finished" podID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerID="38c018acce185ee49e6beb235d03caf2a79a5e550ee5bd25c921c3043d7696bf" exitCode=0 Oct 08 13:38:08 crc kubenswrapper[5065]: I1008 13:38:08.327917 5065 generic.go:334] "Generic (PLEG): container finished" podID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerID="98ddb317a94821c5f6730cda7692ac667d579543f81c6a04c1be74cd649f8c48" exitCode=2 Oct 08 13:38:08 crc kubenswrapper[5065]: I1008 13:38:08.327931 5065 generic.go:334] "Generic (PLEG): container finished" podID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerID="9ccfe4033d06c89a67e2b49e2fdf1b36dc9d692ea6c87f755229d722db5b2b2a" exitCode=0 Oct 08 13:38:08 crc kubenswrapper[5065]: I1008 13:38:08.327750 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656","Type":"ContainerDied","Data":"38c018acce185ee49e6beb235d03caf2a79a5e550ee5bd25c921c3043d7696bf"} Oct 08 13:38:08 crc kubenswrapper[5065]: I1008 13:38:08.327969 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656","Type":"ContainerDied","Data":"98ddb317a94821c5f6730cda7692ac667d579543f81c6a04c1be74cd649f8c48"} Oct 08 13:38:08 crc kubenswrapper[5065]: I1008 13:38:08.327985 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656","Type":"ContainerDied","Data":"9ccfe4033d06c89a67e2b49e2fdf1b36dc9d692ea6c87f755229d722db5b2b2a"} Oct 08 13:38:09 crc kubenswrapper[5065]: I1008 13:38:09.338078 5065 generic.go:334] "Generic (PLEG): container finished" podID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerID="dd7d1d756255801153ceda68a58971ef76b30fb2037ff80e5e87bbd0f7e89568" exitCode=0 Oct 08 13:38:09 crc kubenswrapper[5065]: I1008 13:38:09.338120 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656","Type":"ContainerDied","Data":"dd7d1d756255801153ceda68a58971ef76b30fb2037ff80e5e87bbd0f7e89568"} Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.039099 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.183828 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-log-httpd\") pod \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.183931 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc2sq\" (UniqueName: \"kubernetes.io/projected/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-kube-api-access-cc2sq\") pod \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.184265 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" (UID: "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.184897 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-scripts\") pod \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.184935 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-combined-ca-bundle\") pod \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.185025 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-sg-core-conf-yaml\") pod \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.185184 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-config-data\") pod \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.185269 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-run-httpd\") pod \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\" (UID: \"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656\") " Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.185955 5065 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.186268 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" (UID: "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.191347 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-kube-api-access-cc2sq" (OuterVolumeSpecName: "kube-api-access-cc2sq") pod "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" (UID: "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656"). InnerVolumeSpecName "kube-api-access-cc2sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.191447 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-scripts" (OuterVolumeSpecName: "scripts") pod "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" (UID: "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.213040 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" (UID: "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.263969 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" (UID: "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.286599 5065 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.286631 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc2sq\" (UniqueName: \"kubernetes.io/projected/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-kube-api-access-cc2sq\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.286642 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.286650 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.286659 5065 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.306293 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-config-data" (OuterVolumeSpecName: "config-data") pod "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" (UID: "69e3d4f0-8cb3-4893-b1de-ca05ed9eb656"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.385139 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.385132 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69e3d4f0-8cb3-4893-b1de-ca05ed9eb656","Type":"ContainerDied","Data":"ddb9452c56de578cde4f2e3698fac15e929d66cb8a782e3f7024b9d1a2a6141a"} Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.385665 5065 scope.go:117] "RemoveContainer" containerID="38c018acce185ee49e6beb235d03caf2a79a5e550ee5bd25c921c3043d7696bf" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.386874 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ns8wd" event={"ID":"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3","Type":"ContainerStarted","Data":"83fabf1eea9a4b46559df8e9fbd02592791da0d4384f0b4cfafb802cf3572c8d"} Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.389602 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.411265 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ns8wd" podStartSLOduration=1.67715344 podStartE2EDuration="11.411243689s" podCreationTimestamp="2025-10-08 13:38:03 +0000 UTC" firstStartedPulling="2025-10-08 13:38:04.064693375 +0000 UTC m=+1185.842075132" lastFinishedPulling="2025-10-08 13:38:13.798783624 +0000 UTC m=+1195.576165381" observedRunningTime="2025-10-08 13:38:14.403010359 +0000 UTC m=+1196.180392136" watchObservedRunningTime="2025-10-08 13:38:14.411243689 +0000 UTC m=+1196.188625456" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.411976 5065 scope.go:117] "RemoveContainer" containerID="98ddb317a94821c5f6730cda7692ac667d579543f81c6a04c1be74cd649f8c48" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.443975 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.458717 5065 scope.go:117] "RemoveContainer" containerID="9ccfe4033d06c89a67e2b49e2fdf1b36dc9d692ea6c87f755229d722db5b2b2a" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.460647 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.472591 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:38:14 crc kubenswrapper[5065]: E1008 13:38:14.473196 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="ceilometer-notification-agent" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.473278 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="ceilometer-notification-agent" Oct 08 13:38:14 crc kubenswrapper[5065]: E1008 13:38:14.473395 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="ceilometer-central-agent" Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.473492 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" 
containerName="ceilometer-central-agent"
Oct 08 13:38:14 crc kubenswrapper[5065]: E1008 13:38:14.473579 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="sg-core"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.473657 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="sg-core"
Oct 08 13:38:14 crc kubenswrapper[5065]: E1008 13:38:14.473741 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="proxy-httpd"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.473799 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="proxy-httpd"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.474030 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="ceilometer-central-agent"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.474100 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="sg-core"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.474166 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="ceilometer-notification-agent"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.474226 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" containerName="proxy-httpd"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.475860 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.478967 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.479602 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.480726 5065 scope.go:117] "RemoveContainer" containerID="dd7d1d756255801153ceda68a58971ef76b30fb2037ff80e5e87bbd0f7e89568"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.483780 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.594490 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc02855-efea-4a87-a458-3b1a5d163ae3-run-httpd\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.594540 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-config-data\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.594600 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm5m2\" (UniqueName: \"kubernetes.io/projected/3cc02855-efea-4a87-a458-3b1a5d163ae3-kube-api-access-lm5m2\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.594750 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc02855-efea-4a87-a458-3b1a5d163ae3-log-httpd\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.594821 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-scripts\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.594876 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.595547 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.698282 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc02855-efea-4a87-a458-3b1a5d163ae3-log-httpd\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.698821 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc02855-efea-4a87-a458-3b1a5d163ae3-log-httpd\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.698949 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-scripts\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.699087 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.699972 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.700266 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc02855-efea-4a87-a458-3b1a5d163ae3-run-httpd\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.700917 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-config-data\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.700694 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc02855-efea-4a87-a458-3b1a5d163ae3-run-httpd\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.701195 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm5m2\" (UniqueName: \"kubernetes.io/projected/3cc02855-efea-4a87-a458-3b1a5d163ae3-kube-api-access-lm5m2\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.704719 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.704914 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-config-data\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.704976 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-scripts\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.718862 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.720579 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm5m2\" (UniqueName: \"kubernetes.io/projected/3cc02855-efea-4a87-a458-3b1a5d163ae3-kube-api-access-lm5m2\") pod \"ceilometer-0\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") " pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.805993 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.894599 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e3d4f0-8cb3-4893-b1de-ca05ed9eb656" path="/var/lib/kubelet/pods/69e3d4f0-8cb3-4893-b1de-ca05ed9eb656/volumes"
Oct 08 13:38:14 crc kubenswrapper[5065]: I1008 13:38:14.900914 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 13:38:15 crc kubenswrapper[5065]: I1008 13:38:15.331476 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 13:38:15 crc kubenswrapper[5065]: W1008 13:38:15.350124 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cc02855_efea_4a87_a458_3b1a5d163ae3.slice/crio-e2b0da0600132e4ad7a4b22c573677e075b2b68824b32e4b46a67a9e03e0bc13 WatchSource:0}: Error finding container e2b0da0600132e4ad7a4b22c573677e075b2b68824b32e4b46a67a9e03e0bc13: Status 404 returned error can't find the container with id e2b0da0600132e4ad7a4b22c573677e075b2b68824b32e4b46a67a9e03e0bc13
Oct 08 13:38:15 crc kubenswrapper[5065]: I1008 13:38:15.399125 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc02855-efea-4a87-a458-3b1a5d163ae3","Type":"ContainerStarted","Data":"e2b0da0600132e4ad7a4b22c573677e075b2b68824b32e4b46a67a9e03e0bc13"}
Oct 08 13:38:15 crc kubenswrapper[5065]: E1008 13:38:15.634021 5065 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a257f6_4b74_429b_9da0_b76051265822.slice/crio-a89c20da0276a3947aed1ced43898875afc887bdfcfa04aee8f56a3d061dd158\": RecentStats: unable to find data in memory cache]"
Oct 08 13:38:17 crc kubenswrapper[5065]: I1008 13:38:17.418079 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc02855-efea-4a87-a458-3b1a5d163ae3","Type":"ContainerStarted","Data":"fecfc6e31c206bbe912ac5c8b8264c94bf13c364b495bb0201aed80da5711692"}
Oct 08 13:38:18 crc kubenswrapper[5065]: I1008 13:38:18.428711 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc02855-efea-4a87-a458-3b1a5d163ae3","Type":"ContainerStarted","Data":"a34ec7a496ffc3441ee42b0e17eabf051f4a701f17514e0a91f3befcf2c5821a"}
Oct 08 13:38:19 crc kubenswrapper[5065]: I1008 13:38:19.439705 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc02855-efea-4a87-a458-3b1a5d163ae3","Type":"ContainerStarted","Data":"647eab6d73ec723c1ccd0913799013ec7971d5eea4c179af94e07a6b79438dad"}
Oct 08 13:38:21 crc kubenswrapper[5065]: I1008 13:38:21.463716 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc02855-efea-4a87-a458-3b1a5d163ae3","Type":"ContainerStarted","Data":"ed57e79a9394b02f29905315423c15ad9c040b7179c9f92b47ada952bfbad649"}
Oct 08 13:38:21 crc kubenswrapper[5065]: I1008 13:38:21.464046 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="ceilometer-central-agent" containerID="cri-o://fecfc6e31c206bbe912ac5c8b8264c94bf13c364b495bb0201aed80da5711692" gracePeriod=30
Oct 08 13:38:21 crc kubenswrapper[5065]: I1008 13:38:21.464433 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="sg-core" containerID="cri-o://647eab6d73ec723c1ccd0913799013ec7971d5eea4c179af94e07a6b79438dad" gracePeriod=30
Oct 08 13:38:21 crc kubenswrapper[5065]: I1008 13:38:21.464484 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 08 13:38:21 crc kubenswrapper[5065]: I1008 13:38:21.464482 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="ceilometer-notification-agent" containerID="cri-o://a34ec7a496ffc3441ee42b0e17eabf051f4a701f17514e0a91f3befcf2c5821a" gracePeriod=30
Oct 08 13:38:21 crc kubenswrapper[5065]: I1008 13:38:21.464410 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="proxy-httpd" containerID="cri-o://ed57e79a9394b02f29905315423c15ad9c040b7179c9f92b47ada952bfbad649" gracePeriod=30
Oct 08 13:38:21 crc kubenswrapper[5065]: I1008 13:38:21.504675 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.163032567 podStartE2EDuration="7.504656692s" podCreationTimestamp="2025-10-08 13:38:14 +0000 UTC" firstStartedPulling="2025-10-08 13:38:15.362836398 +0000 UTC m=+1197.140218155" lastFinishedPulling="2025-10-08 13:38:20.704460523 +0000 UTC m=+1202.481842280" observedRunningTime="2025-10-08 13:38:21.497642156 +0000 UTC m=+1203.275023923" watchObservedRunningTime="2025-10-08 13:38:21.504656692 +0000 UTC m=+1203.282038459"
Oct 08 13:38:22 crc kubenswrapper[5065]: I1008 13:38:22.477851 5065 generic.go:334] "Generic (PLEG): container finished" podID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerID="ed57e79a9394b02f29905315423c15ad9c040b7179c9f92b47ada952bfbad649" exitCode=0
Oct 08 13:38:22 crc kubenswrapper[5065]: I1008 13:38:22.477883 5065 generic.go:334] "Generic (PLEG): container finished" podID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerID="647eab6d73ec723c1ccd0913799013ec7971d5eea4c179af94e07a6b79438dad" exitCode=2
Oct 08 13:38:22 crc kubenswrapper[5065]: I1008 13:38:22.477890 5065 generic.go:334] "Generic (PLEG): container finished" podID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerID="a34ec7a496ffc3441ee42b0e17eabf051f4a701f17514e0a91f3befcf2c5821a" exitCode=0
Oct 08 13:38:22 crc kubenswrapper[5065]: I1008 13:38:22.477909 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc02855-efea-4a87-a458-3b1a5d163ae3","Type":"ContainerDied","Data":"ed57e79a9394b02f29905315423c15ad9c040b7179c9f92b47ada952bfbad649"}
Oct 08 13:38:22 crc kubenswrapper[5065]: I1008 13:38:22.477933 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc02855-efea-4a87-a458-3b1a5d163ae3","Type":"ContainerDied","Data":"647eab6d73ec723c1ccd0913799013ec7971d5eea4c179af94e07a6b79438dad"}
Oct 08 13:38:22 crc kubenswrapper[5065]: I1008 13:38:22.477943 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc02855-efea-4a87-a458-3b1a5d163ae3","Type":"ContainerDied","Data":"a34ec7a496ffc3441ee42b0e17eabf051f4a701f17514e0a91f3befcf2c5821a"}
Oct 08 13:38:26 crc kubenswrapper[5065]: I1008 13:38:26.959730 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 13:38:26 crc kubenswrapper[5065]: I1008 13:38:26.983139 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm5m2\" (UniqueName: \"kubernetes.io/projected/3cc02855-efea-4a87-a458-3b1a5d163ae3-kube-api-access-lm5m2\") pod \"3cc02855-efea-4a87-a458-3b1a5d163ae3\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") "
Oct 08 13:38:26 crc kubenswrapper[5065]: I1008 13:38:26.983230 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-scripts\") pod \"3cc02855-efea-4a87-a458-3b1a5d163ae3\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") "
Oct 08 13:38:26 crc kubenswrapper[5065]: I1008 13:38:26.983273 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc02855-efea-4a87-a458-3b1a5d163ae3-log-httpd\") pod \"3cc02855-efea-4a87-a458-3b1a5d163ae3\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") "
Oct 08 13:38:26 crc kubenswrapper[5065]: I1008 13:38:26.983320 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-config-data\") pod \"3cc02855-efea-4a87-a458-3b1a5d163ae3\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") "
Oct 08 13:38:26 crc kubenswrapper[5065]: I1008 13:38:26.983360 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc02855-efea-4a87-a458-3b1a5d163ae3-run-httpd\") pod \"3cc02855-efea-4a87-a458-3b1a5d163ae3\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") "
Oct 08 13:38:26 crc kubenswrapper[5065]: I1008 13:38:26.983406 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-combined-ca-bundle\") pod \"3cc02855-efea-4a87-a458-3b1a5d163ae3\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") "
Oct 08 13:38:26 crc kubenswrapper[5065]: I1008 13:38:26.983472 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-sg-core-conf-yaml\") pod \"3cc02855-efea-4a87-a458-3b1a5d163ae3\" (UID: \"3cc02855-efea-4a87-a458-3b1a5d163ae3\") "
Oct 08 13:38:26 crc kubenswrapper[5065]: I1008 13:38:26.984067 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc02855-efea-4a87-a458-3b1a5d163ae3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3cc02855-efea-4a87-a458-3b1a5d163ae3" (UID: "3cc02855-efea-4a87-a458-3b1a5d163ae3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:38:26 crc kubenswrapper[5065]: I1008 13:38:26.985857 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc02855-efea-4a87-a458-3b1a5d163ae3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3cc02855-efea-4a87-a458-3b1a5d163ae3" (UID: "3cc02855-efea-4a87-a458-3b1a5d163ae3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:38:26 crc kubenswrapper[5065]: I1008 13:38:26.989509 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-scripts" (OuterVolumeSpecName: "scripts") pod "3cc02855-efea-4a87-a458-3b1a5d163ae3" (UID: "3cc02855-efea-4a87-a458-3b1a5d163ae3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.000659 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc02855-efea-4a87-a458-3b1a5d163ae3-kube-api-access-lm5m2" (OuterVolumeSpecName: "kube-api-access-lm5m2") pod "3cc02855-efea-4a87-a458-3b1a5d163ae3" (UID: "3cc02855-efea-4a87-a458-3b1a5d163ae3"). InnerVolumeSpecName "kube-api-access-lm5m2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.033981 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3cc02855-efea-4a87-a458-3b1a5d163ae3" (UID: "3cc02855-efea-4a87-a458-3b1a5d163ae3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.067304 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cc02855-efea-4a87-a458-3b1a5d163ae3" (UID: "3cc02855-efea-4a87-a458-3b1a5d163ae3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.086052 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.086092 5065 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc02855-efea-4a87-a458-3b1a5d163ae3-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.086109 5065 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc02855-efea-4a87-a458-3b1a5d163ae3-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.086120 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.086132 5065 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.086143 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm5m2\" (UniqueName: \"kubernetes.io/projected/3cc02855-efea-4a87-a458-3b1a5d163ae3-kube-api-access-lm5m2\") on node \"crc\" DevicePath \"\""
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.088845 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-config-data" (OuterVolumeSpecName: "config-data") pod "3cc02855-efea-4a87-a458-3b1a5d163ae3" (UID: "3cc02855-efea-4a87-a458-3b1a5d163ae3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.187652 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc02855-efea-4a87-a458-3b1a5d163ae3-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.540933 5065 generic.go:334] "Generic (PLEG): container finished" podID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerID="fecfc6e31c206bbe912ac5c8b8264c94bf13c364b495bb0201aed80da5711692" exitCode=0
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.540975 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc02855-efea-4a87-a458-3b1a5d163ae3","Type":"ContainerDied","Data":"fecfc6e31c206bbe912ac5c8b8264c94bf13c364b495bb0201aed80da5711692"}
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.541001 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc02855-efea-4a87-a458-3b1a5d163ae3","Type":"ContainerDied","Data":"e2b0da0600132e4ad7a4b22c573677e075b2b68824b32e4b46a67a9e03e0bc13"}
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.541021 5065 scope.go:117] "RemoveContainer" containerID="ed57e79a9394b02f29905315423c15ad9c040b7179c9f92b47ada952bfbad649"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.541052 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.569672 5065 scope.go:117] "RemoveContainer" containerID="647eab6d73ec723c1ccd0913799013ec7971d5eea4c179af94e07a6b79438dad"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.576687 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.587784 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.604957 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 08 13:38:27 crc kubenswrapper[5065]: E1008 13:38:27.605397 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="ceilometer-central-agent"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.605429 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="ceilometer-central-agent"
Oct 08 13:38:27 crc kubenswrapper[5065]: E1008 13:38:27.605453 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="proxy-httpd"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.605461 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="proxy-httpd"
Oct 08 13:38:27 crc kubenswrapper[5065]: E1008 13:38:27.605474 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="ceilometer-notification-agent"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.605482 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="ceilometer-notification-agent"
Oct 08 13:38:27 crc kubenswrapper[5065]: E1008 13:38:27.605491 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="sg-core"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.605498 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="sg-core"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.606441 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="ceilometer-notification-agent"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.606464 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="sg-core"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.606483 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="ceilometer-central-agent"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.606501 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" containerName="proxy-httpd"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.610149 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.611700 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.612321 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.689094 5065 scope.go:117] "RemoveContainer" containerID="a34ec7a496ffc3441ee42b0e17eabf051f4a701f17514e0a91f3befcf2c5821a"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.691873 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.696676 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.696764 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.696821 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-log-httpd\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.696897 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-run-httpd\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.696927 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xllrl\" (UniqueName: \"kubernetes.io/projected/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-kube-api-access-xllrl\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.697084 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-config-data\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.697161 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-scripts\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.709486 5065 scope.go:117] "RemoveContainer" containerID="fecfc6e31c206bbe912ac5c8b8264c94bf13c364b495bb0201aed80da5711692"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.726230 5065 scope.go:117] "RemoveContainer" containerID="ed57e79a9394b02f29905315423c15ad9c040b7179c9f92b47ada952bfbad649"
Oct 08 13:38:27 crc kubenswrapper[5065]: E1008 13:38:27.726720 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed57e79a9394b02f29905315423c15ad9c040b7179c9f92b47ada952bfbad649\": container with ID starting with ed57e79a9394b02f29905315423c15ad9c040b7179c9f92b47ada952bfbad649 not found: ID does not exist" containerID="ed57e79a9394b02f29905315423c15ad9c040b7179c9f92b47ada952bfbad649"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.726761 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed57e79a9394b02f29905315423c15ad9c040b7179c9f92b47ada952bfbad649"} err="failed to get container status \"ed57e79a9394b02f29905315423c15ad9c040b7179c9f92b47ada952bfbad649\": rpc error: code = NotFound desc = could not find container \"ed57e79a9394b02f29905315423c15ad9c040b7179c9f92b47ada952bfbad649\": container with ID starting with ed57e79a9394b02f29905315423c15ad9c040b7179c9f92b47ada952bfbad649 not found: ID does not exist"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.726790 5065 scope.go:117] "RemoveContainer" containerID="647eab6d73ec723c1ccd0913799013ec7971d5eea4c179af94e07a6b79438dad"
Oct 08 13:38:27 crc kubenswrapper[5065]: E1008 13:38:27.727252 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647eab6d73ec723c1ccd0913799013ec7971d5eea4c179af94e07a6b79438dad\": container with ID starting with 647eab6d73ec723c1ccd0913799013ec7971d5eea4c179af94e07a6b79438dad not found: ID does not exist" containerID="647eab6d73ec723c1ccd0913799013ec7971d5eea4c179af94e07a6b79438dad"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.727319 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647eab6d73ec723c1ccd0913799013ec7971d5eea4c179af94e07a6b79438dad"} err="failed to get container status \"647eab6d73ec723c1ccd0913799013ec7971d5eea4c179af94e07a6b79438dad\": rpc error: code = NotFound desc = could not find container \"647eab6d73ec723c1ccd0913799013ec7971d5eea4c179af94e07a6b79438dad\": container with ID starting with 647eab6d73ec723c1ccd0913799013ec7971d5eea4c179af94e07a6b79438dad not found: ID does not exist"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.727346 5065 scope.go:117] "RemoveContainer" containerID="a34ec7a496ffc3441ee42b0e17eabf051f4a701f17514e0a91f3befcf2c5821a"
Oct 08 13:38:27 crc kubenswrapper[5065]: E1008 13:38:27.727780 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a34ec7a496ffc3441ee42b0e17eabf051f4a701f17514e0a91f3befcf2c5821a\": container with ID starting with a34ec7a496ffc3441ee42b0e17eabf051f4a701f17514e0a91f3befcf2c5821a not found: ID does not exist" containerID="a34ec7a496ffc3441ee42b0e17eabf051f4a701f17514e0a91f3befcf2c5821a"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.727820 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a34ec7a496ffc3441ee42b0e17eabf051f4a701f17514e0a91f3befcf2c5821a"} err="failed to get container status \"a34ec7a496ffc3441ee42b0e17eabf051f4a701f17514e0a91f3befcf2c5821a\": rpc error: code = NotFound desc = could not find container \"a34ec7a496ffc3441ee42b0e17eabf051f4a701f17514e0a91f3befcf2c5821a\": container with ID starting with a34ec7a496ffc3441ee42b0e17eabf051f4a701f17514e0a91f3befcf2c5821a not found: ID does not exist"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.727846 5065 scope.go:117] "RemoveContainer" containerID="fecfc6e31c206bbe912ac5c8b8264c94bf13c364b495bb0201aed80da5711692"
Oct 08 13:38:27 crc kubenswrapper[5065]: E1008 13:38:27.728051 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fecfc6e31c206bbe912ac5c8b8264c94bf13c364b495bb0201aed80da5711692\": container with ID starting with fecfc6e31c206bbe912ac5c8b8264c94bf13c364b495bb0201aed80da5711692 not found: ID does not exist" containerID="fecfc6e31c206bbe912ac5c8b8264c94bf13c364b495bb0201aed80da5711692"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.728068 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fecfc6e31c206bbe912ac5c8b8264c94bf13c364b495bb0201aed80da5711692"} err="failed to get container status \"fecfc6e31c206bbe912ac5c8b8264c94bf13c364b495bb0201aed80da5711692\": rpc error: code = NotFound desc = could not find container \"fecfc6e31c206bbe912ac5c8b8264c94bf13c364b495bb0201aed80da5711692\": container with ID starting with fecfc6e31c206bbe912ac5c8b8264c94bf13c364b495bb0201aed80da5711692 not found: ID does not exist"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.798923 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-run-httpd\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.798977 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xllrl\" (UniqueName: \"kubernetes.io/projected/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-kube-api-access-xllrl\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.799056 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-config-data\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.799099 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-scripts\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.799149 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.799193 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.799234 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-log-httpd\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.799550 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-run-httpd\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.799765 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-log-httpd\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.803853 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.803928 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-config-data\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.804172 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-scripts\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.805444 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.830848 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xllrl\" (UniqueName: \"kubernetes.io/projected/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-kube-api-access-xllrl\") pod \"ceilometer-0\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " pod="openstack/ceilometer-0"
Oct 08 13:38:27 crc kubenswrapper[5065]: I1008 13:38:27.988749 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 13:38:28 crc kubenswrapper[5065]: I1008 13:38:28.460090 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 13:38:28 crc kubenswrapper[5065]: I1008 13:38:28.550801 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2","Type":"ContainerStarted","Data":"3c1c250c587e0e41773b9925f65a735adc5b6716370192b774caef515b254207"}
Oct 08 13:38:28 crc kubenswrapper[5065]: I1008 13:38:28.554431 5065 generic.go:334] "Generic (PLEG): container finished" podID="1ca7fdfe-89ed-41bb-a9cb-919a501afeb3" containerID="83fabf1eea9a4b46559df8e9fbd02592791da0d4384f0b4cfafb802cf3572c8d" exitCode=0
Oct 08 13:38:28 crc kubenswrapper[5065]: I1008 13:38:28.554505 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ns8wd" event={"ID":"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3","Type":"ContainerDied","Data":"83fabf1eea9a4b46559df8e9fbd02592791da0d4384f0b4cfafb802cf3572c8d"}
Oct 08 13:38:28 crc kubenswrapper[5065]: I1008 13:38:28.889166 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc02855-efea-4a87-a458-3b1a5d163ae3" path="/var/lib/kubelet/pods/3cc02855-efea-4a87-a458-3b1a5d163ae3/volumes"
Oct 08 13:38:29 crc kubenswrapper[5065]: I1008 13:38:29.574768 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2","Type":"ContainerStarted","Data":"ba7b8a862105c0b0b8721e1c3638070413db61428e0120dfd34f30372195dffa"}
Oct 08 13:38:29 crc kubenswrapper[5065]: I1008 13:38:29.910090 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ns8wd"
Oct 08 13:38:29 crc kubenswrapper[5065]: I1008 13:38:29.945009 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-combined-ca-bundle\") pod \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") "
Oct 08 13:38:29 crc kubenswrapper[5065]: I1008 13:38:29.945074 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-config-data\") pod \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") "
Oct 08 13:38:29 crc kubenswrapper[5065]: I1008 13:38:29.945110 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-scripts\") pod \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") "
Oct 08 13:38:29 crc kubenswrapper[5065]: I1008 13:38:29.945201 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrk22\" (UniqueName: \"kubernetes.io/projected/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-kube-api-access-qrk22\") pod \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\" (UID: \"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3\") "
Oct 08 13:38:29 crc kubenswrapper[5065]: I1008 13:38:29.951433 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-scripts" (OuterVolumeSpecName: "scripts") pod "1ca7fdfe-89ed-41bb-a9cb-919a501afeb3" (UID: "1ca7fdfe-89ed-41bb-a9cb-919a501afeb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:38:29 crc kubenswrapper[5065]: I1008 13:38:29.951583 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-kube-api-access-qrk22" (OuterVolumeSpecName: "kube-api-access-qrk22") pod "1ca7fdfe-89ed-41bb-a9cb-919a501afeb3" (UID: "1ca7fdfe-89ed-41bb-a9cb-919a501afeb3"). InnerVolumeSpecName "kube-api-access-qrk22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:38:29 crc kubenswrapper[5065]: I1008 13:38:29.973659 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ca7fdfe-89ed-41bb-a9cb-919a501afeb3" (UID: "1ca7fdfe-89ed-41bb-a9cb-919a501afeb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:38:29 crc kubenswrapper[5065]: I1008 13:38:29.977191 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-config-data" (OuterVolumeSpecName: "config-data") pod "1ca7fdfe-89ed-41bb-a9cb-919a501afeb3" (UID: "1ca7fdfe-89ed-41bb-a9cb-919a501afeb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.046612 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.046650 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.046659 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.046668 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrk22\" (UniqueName: \"kubernetes.io/projected/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3-kube-api-access-qrk22\") on node \"crc\" DevicePath \"\""
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.587234 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ns8wd" event={"ID":"1ca7fdfe-89ed-41bb-a9cb-919a501afeb3","Type":"ContainerDied","Data":"82c7d34db9c9e10929e72c40c11a651870378cb67d25efbce308fb76105f83a8"}
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.587592 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82c7d34db9c9e10929e72c40c11a651870378cb67d25efbce308fb76105f83a8"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.587323 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ns8wd"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.599794 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2","Type":"ContainerStarted","Data":"eceba4e5ff4a7b0a17a482280aa926db083c1840f8a52b7bb5f54470b000fd68"}
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.795282 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Oct 08 13:38:30 crc kubenswrapper[5065]: E1008 13:38:30.795819 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca7fdfe-89ed-41bb-a9cb-919a501afeb3" containerName="nova-cell0-conductor-db-sync"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.795847 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca7fdfe-89ed-41bb-a9cb-919a501afeb3" containerName="nova-cell0-conductor-db-sync"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.796092 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ca7fdfe-89ed-41bb-a9cb-919a501afeb3" containerName="nova-cell0-conductor-db-sync"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.796876 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.804244 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.806698 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rrzqm"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.807151 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.862338 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqflt\" (UniqueName: \"kubernetes.io/projected/84d28af9-b1bc-4475-abc6-9c33380349e9-kube-api-access-qqflt\") pod \"nova-cell0-conductor-0\" (UID: \"84d28af9-b1bc-4475-abc6-9c33380349e9\") " pod="openstack/nova-cell0-conductor-0"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.862476 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d28af9-b1bc-4475-abc6-9c33380349e9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"84d28af9-b1bc-4475-abc6-9c33380349e9\") " pod="openstack/nova-cell0-conductor-0"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.862582 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d28af9-b1bc-4475-abc6-9c33380349e9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"84d28af9-b1bc-4475-abc6-9c33380349e9\") " pod="openstack/nova-cell0-conductor-0"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.964756 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqflt\" (UniqueName: \"kubernetes.io/projected/84d28af9-b1bc-4475-abc6-9c33380349e9-kube-api-access-qqflt\") pod \"nova-cell0-conductor-0\" (UID: \"84d28af9-b1bc-4475-abc6-9c33380349e9\") " pod="openstack/nova-cell0-conductor-0"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.965106 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d28af9-b1bc-4475-abc6-9c33380349e9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"84d28af9-b1bc-4475-abc6-9c33380349e9\") " pod="openstack/nova-cell0-conductor-0"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.965223 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d28af9-b1bc-4475-abc6-9c33380349e9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"84d28af9-b1bc-4475-abc6-9c33380349e9\") " pod="openstack/nova-cell0-conductor-0"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.974335 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d28af9-b1bc-4475-abc6-9c33380349e9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"84d28af9-b1bc-4475-abc6-9c33380349e9\") " pod="openstack/nova-cell0-conductor-0"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.983533 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqflt\" (UniqueName: \"kubernetes.io/projected/84d28af9-b1bc-4475-abc6-9c33380349e9-kube-api-access-qqflt\") pod \"nova-cell0-conductor-0\" (UID: \"84d28af9-b1bc-4475-abc6-9c33380349e9\") " pod="openstack/nova-cell0-conductor-0"
Oct 08 13:38:30 crc kubenswrapper[5065]: I1008 13:38:30.985170 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d28af9-b1bc-4475-abc6-9c33380349e9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"84d28af9-b1bc-4475-abc6-9c33380349e9\") " pod="openstack/nova-cell0-conductor-0"
Oct 08 13:38:31 crc kubenswrapper[5065]: I1008 13:38:31.251632 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Oct 08 13:38:31 crc kubenswrapper[5065]: I1008 13:38:31.612236 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2","Type":"ContainerStarted","Data":"61cbc642a10bdde48e29aac4d484ad237361c92bbcd5333e7f94cbd562cff639"}
Oct 08 13:38:31 crc kubenswrapper[5065]: I1008 13:38:31.748505 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Oct 08 13:38:31 crc kubenswrapper[5065]: W1008 13:38:31.752261 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84d28af9_b1bc_4475_abc6_9c33380349e9.slice/crio-ee52a3178beae32b84f9e8f28818f8f7db3d6bb3530336aff8001369a38af345 WatchSource:0}: Error finding container ee52a3178beae32b84f9e8f28818f8f7db3d6bb3530336aff8001369a38af345: Status 404 returned error can't find the container with id ee52a3178beae32b84f9e8f28818f8f7db3d6bb3530336aff8001369a38af345
Oct 08 13:38:32 crc kubenswrapper[5065]: I1008 13:38:32.643849 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"84d28af9-b1bc-4475-abc6-9c33380349e9","Type":"ContainerStarted","Data":"5540a1f6a5acf06d57aab76547727aae20f3096f0b615cf88a2220bb824dec41"}
Oct 08 13:38:32 crc kubenswrapper[5065]: I1008 13:38:32.644221 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Oct 08 13:38:32 crc kubenswrapper[5065]: I1008 13:38:32.644239 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"84d28af9-b1bc-4475-abc6-9c33380349e9","Type":"ContainerStarted","Data":"ee52a3178beae32b84f9e8f28818f8f7db3d6bb3530336aff8001369a38af345"}
Oct 08 13:38:32 crc kubenswrapper[5065]: I1008 13:38:32.666119 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.666100807 podStartE2EDuration="2.666100807s" podCreationTimestamp="2025-10-08 13:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:38:32.661034876 +0000 UTC m=+1214.438416633" watchObservedRunningTime="2025-10-08 13:38:32.666100807 +0000 UTC m=+1214.443482564"
Oct 08 13:38:33 crc kubenswrapper[5065]: I1008 13:38:33.654402 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2","Type":"ContainerStarted","Data":"15f53413bf745e48057f4ae3b8302183aa8bea3fed64ac50f9fab2de6c5f1908"}
Oct 08 13:38:33 crc kubenswrapper[5065]: I1008 13:38:33.687720 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.563442489 podStartE2EDuration="6.687698942s" podCreationTimestamp="2025-10-08 13:38:27 +0000 UTC" firstStartedPulling="2025-10-08 13:38:28.46256962 +0000 UTC m=+1210.239951377" lastFinishedPulling="2025-10-08 13:38:32.586826073 +0000 UTC m=+1214.364207830" observedRunningTime="2025-10-08 13:38:33.676047397 +0000 UTC m=+1215.453429154" watchObservedRunningTime="2025-10-08 13:38:33.687698942 +0000 UTC m=+1215.465080699"
Oct 08 13:38:34 crc kubenswrapper[5065]: I1008 13:38:34.663107 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.288894 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.815836 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mr9ph"]
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.816912 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.821391 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.823544 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mr9ph"]
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.824263 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.889134 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-scripts\") pod \"nova-cell0-cell-mapping-mr9ph\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.889447 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mr9ph\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.889571 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twptb\" (UniqueName: \"kubernetes.io/projected/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-kube-api-access-twptb\") pod \"nova-cell0-cell-mapping-mr9ph\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.889678 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-config-data\") pod \"nova-cell0-cell-mapping-mr9ph\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.961184 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.963572 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.966363 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.989070 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.993543 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-scripts\") pod \"nova-cell0-cell-mapping-mr9ph\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.993636 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mr9ph\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.993712 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twptb\" (UniqueName: \"kubernetes.io/projected/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-kube-api-access-twptb\") pod \"nova-cell0-cell-mapping-mr9ph\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:36 crc kubenswrapper[5065]: I1008 13:38:36.993753 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-config-data\") pod \"nova-cell0-cell-mapping-mr9ph\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.008172 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.008474 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-config-data\") pod \"nova-cell0-cell-mapping-mr9ph\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.009120 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mr9ph\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.009259 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.024947 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-scripts\") pod \"nova-cell0-cell-mapping-mr9ph\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.026560 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.030156 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twptb\" (UniqueName: \"kubernetes.io/projected/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-kube-api-access-twptb\") pod \"nova-cell0-cell-mapping-mr9ph\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.057500 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.096206 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c8282a-ff19-4f87-bd59-164bc28dc720-config-data\") pod \"nova-scheduler-0\" (UID: \"65c8282a-ff19-4f87-bd59-164bc28dc720\") " pod="openstack/nova-scheduler-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.096887 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " pod="openstack/nova-api-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.096968 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95xlt\" (UniqueName: \"kubernetes.io/projected/65c8282a-ff19-4f87-bd59-164bc28dc720-kube-api-access-95xlt\") pod \"nova-scheduler-0\" (UID: \"65c8282a-ff19-4f87-bd59-164bc28dc720\") " pod="openstack/nova-scheduler-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.097055 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-logs\") pod \"nova-api-0\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " pod="openstack/nova-api-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.097176 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c8282a-ff19-4f87-bd59-164bc28dc720-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"65c8282a-ff19-4f87-bd59-164bc28dc720\") " pod="openstack/nova-scheduler-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.097289 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-config-data\") pod \"nova-api-0\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " pod="openstack/nova-api-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.097433 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5j49\" (UniqueName: \"kubernetes.io/projected/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-kube-api-access-t5j49\") pod \"nova-api-0\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " pod="openstack/nova-api-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.104682 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.106082 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.111882 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.144474 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.155182 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mr9ph"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.186027 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-nzm7z"]
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.187492 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.201204 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-nzm7z"]
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.218504 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5j49\" (UniqueName: \"kubernetes.io/projected/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-kube-api-access-t5j49\") pod \"nova-api-0\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " pod="openstack/nova-api-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.218603 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9s45\" (UniqueName: \"kubernetes.io/projected/1d196d62-fd18-4261-9fc8-1f23060a4848-kube-api-access-b9s45\") pod \"nova-metadata-0\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " pod="openstack/nova-metadata-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.218744 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c8282a-ff19-4f87-bd59-164bc28dc720-config-data\") pod \"nova-scheduler-0\" (UID: \"65c8282a-ff19-4f87-bd59-164bc28dc720\") " pod="openstack/nova-scheduler-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.218805 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " pod="openstack/nova-api-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.218838 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95xlt\" (UniqueName: \"kubernetes.io/projected/65c8282a-ff19-4f87-bd59-164bc28dc720-kube-api-access-95xlt\") pod \"nova-scheduler-0\" (UID: \"65c8282a-ff19-4f87-bd59-164bc28dc720\") " pod="openstack/nova-scheduler-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.218869 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-logs\") pod \"nova-api-0\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " pod="openstack/nova-api-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.218951 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d196d62-fd18-4261-9fc8-1f23060a4848-config-data\") pod \"nova-metadata-0\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " pod="openstack/nova-metadata-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.218978 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c8282a-ff19-4f87-bd59-164bc28dc720-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"65c8282a-ff19-4f87-bd59-164bc28dc720\") " pod="openstack/nova-scheduler-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.219022 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d196d62-fd18-4261-9fc8-1f23060a4848-logs\") pod \"nova-metadata-0\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " pod="openstack/nova-metadata-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.219055 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d196d62-fd18-4261-9fc8-1f23060a4848-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " pod="openstack/nova-metadata-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.219074 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-config-data\") pod \"nova-api-0\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " pod="openstack/nova-api-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.224089 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-logs\") pod \"nova-api-0\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " pod="openstack/nova-api-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.228280 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c8282a-ff19-4f87-bd59-164bc28dc720-config-data\") pod \"nova-scheduler-0\" (UID: \"65c8282a-ff19-4f87-bd59-164bc28dc720\") " pod="openstack/nova-scheduler-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.228918 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " pod="openstack/nova-api-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.245191 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c8282a-ff19-4f87-bd59-164bc28dc720-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"65c8282a-ff19-4f87-bd59-164bc28dc720\") " pod="openstack/nova-scheduler-0"
Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.245683 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-config-data\") pod \"nova-api-0\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " pod="openstack/nova-api-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.251274 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95xlt\" (UniqueName: \"kubernetes.io/projected/65c8282a-ff19-4f87-bd59-164bc28dc720-kube-api-access-95xlt\") pod \"nova-scheduler-0\" (UID: \"65c8282a-ff19-4f87-bd59-164bc28dc720\") " pod="openstack/nova-scheduler-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.254010 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5j49\" (UniqueName: \"kubernetes.io/projected/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-kube-api-access-t5j49\") pod \"nova-api-0\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " pod="openstack/nova-api-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.285890 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.318459 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.319664 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.320374 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d196d62-fd18-4261-9fc8-1f23060a4848-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " pod="openstack/nova-metadata-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.320437 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn5tv\" (UniqueName: \"kubernetes.io/projected/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-kube-api-access-vn5tv\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.320464 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-dns-swift-storage-0\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.320518 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9s45\" (UniqueName: \"kubernetes.io/projected/1d196d62-fd18-4261-9fc8-1f23060a4848-kube-api-access-b9s45\") pod \"nova-metadata-0\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " pod="openstack/nova-metadata-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.320594 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.320627 5065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-dns-svc\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.320667 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d196d62-fd18-4261-9fc8-1f23060a4848-config-data\") pod \"nova-metadata-0\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " pod="openstack/nova-metadata-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.320701 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.320726 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-config\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.320744 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d196d62-fd18-4261-9fc8-1f23060a4848-logs\") pod \"nova-metadata-0\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " pod="openstack/nova-metadata-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.321097 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d196d62-fd18-4261-9fc8-1f23060a4848-logs\") pod \"nova-metadata-0\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " pod="openstack/nova-metadata-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.337883 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d196d62-fd18-4261-9fc8-1f23060a4848-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " pod="openstack/nova-metadata-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.338010 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.359801 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.384794 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d196d62-fd18-4261-9fc8-1f23060a4848-config-data\") pod \"nova-metadata-0\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " pod="openstack/nova-metadata-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.396340 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9s45\" (UniqueName: \"kubernetes.io/projected/1d196d62-fd18-4261-9fc8-1f23060a4848-kube-api-access-b9s45\") pod \"nova-metadata-0\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " pod="openstack/nova-metadata-0" Oct 08 
13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.414638 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.423512 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.423556 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-config\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.423743 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn5tv\" (UniqueName: \"kubernetes.io/projected/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-kube-api-access-vn5tv\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.423822 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-dns-swift-storage-0\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.423852 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.423934 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjn9w\" (UniqueName: \"kubernetes.io/projected/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-kube-api-access-kjn9w\") pod \"nova-cell1-novncproxy-0\" (UID: \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.423967 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.423998 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-dns-svc\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.424028 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.424567 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.425327 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-dns-swift-storage-0\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.425864 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.425867 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-dns-svc\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.426240 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-config\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.436894 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.457842 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn5tv\" (UniqueName: \"kubernetes.io/projected/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-kube-api-access-vn5tv\") pod \"dnsmasq-dns-6ffc974fdf-nzm7z\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") " pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.530611 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.531847 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.532027 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjn9w\" (UniqueName: \"kubernetes.io/projected/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-kube-api-access-kjn9w\") pod \"nova-cell1-novncproxy-0\" (UID: \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.543157 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.543762 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.571107 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjn9w\" (UniqueName: \"kubernetes.io/projected/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-kube-api-access-kjn9w\") pod \"nova-cell1-novncproxy-0\" (UID: \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.619755 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mr9ph"] Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.660917 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.669944 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.717230 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mr9ph" event={"ID":"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae","Type":"ContainerStarted","Data":"ec66e323c50e291cad1a751d133d610004957e1a5dead23df78090b55d75a716"} Oct 08 13:38:37 crc kubenswrapper[5065]: I1008 13:38:37.748276 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.187540 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:38:38 crc kubenswrapper[5065]: W1008 13:38:38.205367 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65c8282a_ff19_4f87_bd59_164bc28dc720.slice/crio-ceed3d0a6c653d9020af0c3347c0ad94d78ba8781db358760bdd21b8a2862154 WatchSource:0}: Error finding container ceed3d0a6c653d9020af0c3347c0ad94d78ba8781db358760bdd21b8a2862154: Status 404 returned error can't find the container with id ceed3d0a6c653d9020af0c3347c0ad94d78ba8781db358760bdd21b8a2862154 Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.236105 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7m8vv"] Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.237350 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.239756 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.239870 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.262349 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7m8vv"] Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.345926 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.360623 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-config-data\") pod \"nova-cell1-conductor-db-sync-7m8vv\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.360819 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7m8vv\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.360852 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-scripts\") pod \"nova-cell1-conductor-db-sync-7m8vv\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 
13:38:38.360877 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtgmj\" (UniqueName: \"kubernetes.io/projected/fb3b42da-fb25-4e64-ae1b-1280eb615d00-kube-api-access-mtgmj\") pod \"nova-cell1-conductor-db-sync-7m8vv\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.361571 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:38:38 crc kubenswrapper[5065]: W1008 13:38:38.365485 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d196d62_fd18_4261_9fc8_1f23060a4848.slice/crio-1bf1f102693b464fd1cea2e7ecb0fb0fa5cde4bbc608f9c017d9b23c69b27429 WatchSource:0}: Error finding container 1bf1f102693b464fd1cea2e7ecb0fb0fa5cde4bbc608f9c017d9b23c69b27429: Status 404 returned error can't find the container with id 1bf1f102693b464fd1cea2e7ecb0fb0fa5cde4bbc608f9c017d9b23c69b27429 Oct 08 13:38:38 crc kubenswrapper[5065]: W1008 13:38:38.441061 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae6f2d1c_608d_4ad2_9056_80bb29eac06c.slice/crio-753fa54c598ed5f4812d8a2d024a34f7c289e1850c5ab37541b2ccaf21f20079 WatchSource:0}: Error finding container 753fa54c598ed5f4812d8a2d024a34f7c289e1850c5ab37541b2ccaf21f20079: Status 404 returned error can't find the container with id 753fa54c598ed5f4812d8a2d024a34f7c289e1850c5ab37541b2ccaf21f20079 Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.441461 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-nzm7z"] Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.464666 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7m8vv\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.465059 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-scripts\") pod \"nova-cell1-conductor-db-sync-7m8vv\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.465101 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtgmj\" (UniqueName: \"kubernetes.io/projected/fb3b42da-fb25-4e64-ae1b-1280eb615d00-kube-api-access-mtgmj\") pod \"nova-cell1-conductor-db-sync-7m8vv\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.465250 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-config-data\") pod \"nova-cell1-conductor-db-sync-7m8vv\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.470351 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7m8vv\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.470558 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-config-data\") pod \"nova-cell1-conductor-db-sync-7m8vv\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.474571 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-scripts\") pod \"nova-cell1-conductor-db-sync-7m8vv\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.481193 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtgmj\" (UniqueName: \"kubernetes.io/projected/fb3b42da-fb25-4e64-ae1b-1280eb615d00-kube-api-access-mtgmj\") pod \"nova-cell1-conductor-db-sync-7m8vv\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.599014 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.742232 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" event={"ID":"ae6f2d1c-608d-4ad2-9056-80bb29eac06c","Type":"ContainerStarted","Data":"3885d8ce070d1f851bef97e6ef89f12d92919a06dc4a356ba7016ff55246c4c8"} Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.742560 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" event={"ID":"ae6f2d1c-608d-4ad2-9056-80bb29eac06c","Type":"ContainerStarted","Data":"753fa54c598ed5f4812d8a2d024a34f7c289e1850c5ab37541b2ccaf21f20079"} Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.744108 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mr9ph" event={"ID":"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae","Type":"ContainerStarted","Data":"f73b309d85997bd26052eea883539b00795b144ef3e0730d9fbcb4739f2c945c"} Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.745249 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4","Type":"ContainerStarted","Data":"46fffb7c396eb285a78c344405198bbeacb99289635f7e561c074e22dd72c541"} Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.746244 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"65c8282a-ff19-4f87-bd59-164bc28dc720","Type":"ContainerStarted","Data":"ceed3d0a6c653d9020af0c3347c0ad94d78ba8781db358760bdd21b8a2862154"} Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.747023 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d196d62-fd18-4261-9fc8-1f23060a4848","Type":"ContainerStarted","Data":"1bf1f102693b464fd1cea2e7ecb0fb0fa5cde4bbc608f9c017d9b23c69b27429"} Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.748549 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bcbf9901-3fec-44c5-8560-6c208a0ca8a6","Type":"ContainerStarted","Data":"b991b7a5d705cd9561a4e6938e5e3bd73dddab744fb57205573bffa578a9ab84"} Oct 08 13:38:38 crc kubenswrapper[5065]: I1008 13:38:38.782065 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mr9ph" podStartSLOduration=2.782047191 podStartE2EDuration="2.782047191s" podCreationTimestamp="2025-10-08 13:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:38:38.781070254 +0000 UTC m=+1220.558452021" watchObservedRunningTime="2025-10-08 13:38:38.782047191 +0000 UTC m=+1220.559428948" Oct 08 13:38:39 crc kubenswrapper[5065]: I1008 13:38:39.154611 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7m8vv"] Oct 08 13:38:39 crc kubenswrapper[5065]: W1008 13:38:39.164261 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb3b42da_fb25_4e64_ae1b_1280eb615d00.slice/crio-8ece47061ad7db2d0558c88e499d8f68768a7c34cb9e574d0a7f6aa107770d12 WatchSource:0}: Error finding container 8ece47061ad7db2d0558c88e499d8f68768a7c34cb9e574d0a7f6aa107770d12: Status 404 returned error can't find the container with id 8ece47061ad7db2d0558c88e499d8f68768a7c34cb9e574d0a7f6aa107770d12 Oct 08 13:38:39 crc kubenswrapper[5065]: I1008 13:38:39.771840 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7m8vv" event={"ID":"fb3b42da-fb25-4e64-ae1b-1280eb615d00","Type":"ContainerStarted","Data":"1140116f393d627b40f1e1c2827001066a176489665224e6cb7507310083823c"} Oct 08 13:38:39 crc kubenswrapper[5065]: I1008 13:38:39.772087 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7m8vv" event={"ID":"fb3b42da-fb25-4e64-ae1b-1280eb615d00","Type":"ContainerStarted","Data":"8ece47061ad7db2d0558c88e499d8f68768a7c34cb9e574d0a7f6aa107770d12"} Oct 08 13:38:39 crc kubenswrapper[5065]: I1008 13:38:39.773841 5065 generic.go:334] "Generic (PLEG): container finished" podID="ae6f2d1c-608d-4ad2-9056-80bb29eac06c" containerID="3885d8ce070d1f851bef97e6ef89f12d92919a06dc4a356ba7016ff55246c4c8" exitCode=0 Oct 08 13:38:39 crc kubenswrapper[5065]: I1008 13:38:39.773945 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" event={"ID":"ae6f2d1c-608d-4ad2-9056-80bb29eac06c","Type":"ContainerDied","Data":"3885d8ce070d1f851bef97e6ef89f12d92919a06dc4a356ba7016ff55246c4c8"} Oct 08 13:38:39 crc kubenswrapper[5065]: I1008 13:38:39.789671 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7m8vv" podStartSLOduration=1.789652053 podStartE2EDuration="1.789652053s" podCreationTimestamp="2025-10-08 13:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:38:39.786142465 +0000 UTC m=+1221.563524242" watchObservedRunningTime="2025-10-08 13:38:39.789652053 +0000 UTC m=+1221.567033810" Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.103180 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.113407 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.800736 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" event={"ID":"ae6f2d1c-608d-4ad2-9056-80bb29eac06c","Type":"ContainerStarted","Data":"123d0d3503c977fb8e29547b00bade3ab25c9379036637b70e294431d5b56bf0"} Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.801920 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.803851 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4","Type":"ContainerStarted","Data":"5e46b585a3b70055fa546cd0b8aeb41343b5665f4b3ad2798a120cec4080e9b7"} Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.803873 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4","Type":"ContainerStarted","Data":"eb94245b05a0b8efe37facae1f11301674a53c40fb312a61967a66f36348470c"} Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.805781 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"65c8282a-ff19-4f87-bd59-164bc28dc720","Type":"ContainerStarted","Data":"4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34"} Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.808032 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d196d62-fd18-4261-9fc8-1f23060a4848","Type":"ContainerStarted","Data":"c3553a922499886d8defb258ef862b6b3474e901f2471c50a8abf7cfc95dece5"} Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.808058 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d196d62-fd18-4261-9fc8-1f23060a4848","Type":"ContainerStarted","Data":"c9a33aa28f9f5d9b5d3619cd24d05b298e0894c94d8a9d3808ffa20919248308"} Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.808146 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1d196d62-fd18-4261-9fc8-1f23060a4848" containerName="nova-metadata-log" containerID="cri-o://c9a33aa28f9f5d9b5d3619cd24d05b298e0894c94d8a9d3808ffa20919248308" gracePeriod=30 Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.808390 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1d196d62-fd18-4261-9fc8-1f23060a4848" containerName="nova-metadata-metadata" containerID="cri-o://c3553a922499886d8defb258ef862b6b3474e901f2471c50a8abf7cfc95dece5" gracePeriod=30 Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.810305 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bcbf9901-3fec-44c5-8560-6c208a0ca8a6","Type":"ContainerStarted","Data":"20cd2498cde3222989320920e532311b987c5fa645e47651aa86ae3eff5509c0"} Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.810382 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="bcbf9901-3fec-44c5-8560-6c208a0ca8a6" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://20cd2498cde3222989320920e532311b987c5fa645e47651aa86ae3eff5509c0" gracePeriod=30 Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.824241 5065 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" podStartSLOduration=5.82422171 podStartE2EDuration="5.82422171s" podCreationTimestamp="2025-10-08 13:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:38:42.821224406 +0000 UTC m=+1224.598606173" watchObservedRunningTime="2025-10-08 13:38:42.82422171 +0000 UTC m=+1224.601603467" Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.852609 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.439576624 podStartE2EDuration="5.852587832s" podCreationTimestamp="2025-10-08 13:38:37 +0000 UTC" firstStartedPulling="2025-10-08 13:38:38.369979081 +0000 UTC m=+1220.147360838" lastFinishedPulling="2025-10-08 13:38:41.782990289 +0000 UTC m=+1223.560372046" observedRunningTime="2025-10-08 13:38:42.844891707 +0000 UTC m=+1224.622273474" watchObservedRunningTime="2025-10-08 13:38:42.852587832 +0000 UTC m=+1224.629969589" Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.900087 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.325212611 podStartE2EDuration="6.900069218s" podCreationTimestamp="2025-10-08 13:38:36 +0000 UTC" firstStartedPulling="2025-10-08 13:38:38.230540487 +0000 UTC m=+1220.007922244" lastFinishedPulling="2025-10-08 13:38:41.805397074 +0000 UTC m=+1223.582778851" observedRunningTime="2025-10-08 13:38:42.892919909 +0000 UTC m=+1224.670301666" watchObservedRunningTime="2025-10-08 13:38:42.900069218 +0000 UTC m=+1224.677450975" Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.900801 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.4311389 podStartE2EDuration="5.900794979s" podCreationTimestamp="2025-10-08 13:38:37 +0000 UTC" firstStartedPulling="2025-10-08 13:38:38.359390496 +0000 UTC m=+1220.136772253" lastFinishedPulling="2025-10-08 13:38:41.829046545 +0000 UTC m=+1223.606428332" observedRunningTime="2025-10-08 13:38:42.879948846 +0000 UTC m=+1224.657330603" watchObservedRunningTime="2025-10-08 13:38:42.900794979 +0000 UTC m=+1224.678176736" Oct 08 13:38:42 crc kubenswrapper[5065]: I1008 13:38:42.915969 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.928205922 podStartE2EDuration="6.915951542s" podCreationTimestamp="2025-10-08 13:38:36 +0000 UTC" firstStartedPulling="2025-10-08 13:38:37.81606584 +0000 UTC m=+1219.593447597" lastFinishedPulling="2025-10-08 13:38:41.80381146 +0000 UTC m=+1223.581193217" observedRunningTime="2025-10-08 13:38:42.913470113 +0000 UTC m=+1224.690851880" watchObservedRunningTime="2025-10-08 13:38:42.915951542 +0000 UTC m=+1224.693333299" Oct 08 13:38:43 crc kubenswrapper[5065]: I1008 13:38:43.821498 5065 generic.go:334] "Generic (PLEG): container finished" podID="1d196d62-fd18-4261-9fc8-1f23060a4848" containerID="c9a33aa28f9f5d9b5d3619cd24d05b298e0894c94d8a9d3808ffa20919248308" exitCode=143 Oct 08 13:38:43 crc kubenswrapper[5065]: I1008 13:38:43.821582 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d196d62-fd18-4261-9fc8-1f23060a4848","Type":"ContainerDied","Data":"c9a33aa28f9f5d9b5d3619cd24d05b298e0894c94d8a9d3808ffa20919248308"} Oct 08 13:38:45 crc kubenswrapper[5065]: I1008 13:38:45.840200 5065 generic.go:334] "Generic 
(PLEG): container finished" podID="98fca4dc-7e0d-4bb2-bf66-7f70f13802ae" containerID="f73b309d85997bd26052eea883539b00795b144ef3e0730d9fbcb4739f2c945c" exitCode=0 Oct 08 13:38:45 crc kubenswrapper[5065]: I1008 13:38:45.840310 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mr9ph" event={"ID":"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae","Type":"ContainerDied","Data":"f73b309d85997bd26052eea883539b00795b144ef3e0730d9fbcb4739f2c945c"} Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.242316 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mr9ph" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.291731 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.291773 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.342694 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twptb\" (UniqueName: \"kubernetes.io/projected/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-kube-api-access-twptb\") pod \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.342757 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-combined-ca-bundle\") pod \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.342968 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-scripts\") pod \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.343132 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-config-data\") pod \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\" (UID: \"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae\") " Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.349024 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-kube-api-access-twptb" (OuterVolumeSpecName: "kube-api-access-twptb") pod "98fca4dc-7e0d-4bb2-bf66-7f70f13802ae" (UID: "98fca4dc-7e0d-4bb2-bf66-7f70f13802ae"). InnerVolumeSpecName "kube-api-access-twptb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.349933 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-scripts" (OuterVolumeSpecName: "scripts") pod "98fca4dc-7e0d-4bb2-bf66-7f70f13802ae" (UID: "98fca4dc-7e0d-4bb2-bf66-7f70f13802ae"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.369533 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-config-data" (OuterVolumeSpecName: "config-data") pod "98fca4dc-7e0d-4bb2-bf66-7f70f13802ae" (UID: "98fca4dc-7e0d-4bb2-bf66-7f70f13802ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.383433 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98fca4dc-7e0d-4bb2-bf66-7f70f13802ae" (UID: "98fca4dc-7e0d-4bb2-bf66-7f70f13802ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.414734 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.414806 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.437424 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.437472 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.445682 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.446779 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.446817 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twptb\" (UniqueName: \"kubernetes.io/projected/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-kube-api-access-twptb\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.446831 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.446841 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.662689 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.671041 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.776383 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-tnxx4"] Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.776768 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" 
podUID="69902974-784e-48c1-af4a-49758dcbaa6f" containerName="dnsmasq-dns" containerID="cri-o://e2c222d790912f131ac7d7f129438b01d7050269a304588690a53d5894198e04" gracePeriod=10 Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.861177 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mr9ph" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.861465 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mr9ph" event={"ID":"98fca4dc-7e0d-4bb2-bf66-7f70f13802ae","Type":"ContainerDied","Data":"ec66e323c50e291cad1a751d133d610004957e1a5dead23df78090b55d75a716"} Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.861508 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec66e323c50e291cad1a751d133d610004957e1a5dead23df78090b55d75a716" Oct 08 13:38:47 crc kubenswrapper[5065]: I1008 13:38:47.933458 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.055328 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.055613 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" containerName="nova-api-log" containerID="cri-o://eb94245b05a0b8efe37facae1f11301674a53c40fb312a61967a66f36348470c" gracePeriod=30 Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.056342 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" containerName="nova-api-api" containerID="cri-o://5e46b585a3b70055fa546cd0b8aeb41343b5665f4b3ad2798a120cec4080e9b7" gracePeriod=30 Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.071837 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": EOF" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.071990 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": EOF" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.352750 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.475118 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-ovsdbserver-nb\") pod \"69902974-784e-48c1-af4a-49758dcbaa6f\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.475169 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-dns-svc\") pod \"69902974-784e-48c1-af4a-49758dcbaa6f\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.475237 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-config\") pod \"69902974-784e-48c1-af4a-49758dcbaa6f\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.475254 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-dns-swift-storage-0\") pod \"69902974-784e-48c1-af4a-49758dcbaa6f\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.475364 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-ovsdbserver-sb\") pod \"69902974-784e-48c1-af4a-49758dcbaa6f\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.475454 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmdvr\" (UniqueName: \"kubernetes.io/projected/69902974-784e-48c1-af4a-49758dcbaa6f-kube-api-access-gmdvr\") pod \"69902974-784e-48c1-af4a-49758dcbaa6f\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.485845 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69902974-784e-48c1-af4a-49758dcbaa6f-kube-api-access-gmdvr" (OuterVolumeSpecName: "kube-api-access-gmdvr") pod "69902974-784e-48c1-af4a-49758dcbaa6f" (UID: "69902974-784e-48c1-af4a-49758dcbaa6f"). InnerVolumeSpecName "kube-api-access-gmdvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.526013 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.565232 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "69902974-784e-48c1-af4a-49758dcbaa6f" (UID: "69902974-784e-48c1-af4a-49758dcbaa6f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.572351 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "69902974-784e-48c1-af4a-49758dcbaa6f" (UID: "69902974-784e-48c1-af4a-49758dcbaa6f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.577453 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "69902974-784e-48c1-af4a-49758dcbaa6f" (UID: "69902974-784e-48c1-af4a-49758dcbaa6f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.577646 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-ovsdbserver-sb\") pod \"69902974-784e-48c1-af4a-49758dcbaa6f\" (UID: \"69902974-784e-48c1-af4a-49758dcbaa6f\") " Oct 08 13:38:48 crc kubenswrapper[5065]: W1008 13:38:48.578023 5065 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/69902974-784e-48c1-af4a-49758dcbaa6f/volumes/kubernetes.io~configmap/ovsdbserver-sb Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.578035 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "69902974-784e-48c1-af4a-49758dcbaa6f" (UID: "69902974-784e-48c1-af4a-49758dcbaa6f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.579186 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.579245 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmdvr\" (UniqueName: \"kubernetes.io/projected/69902974-784e-48c1-af4a-49758dcbaa6f-kube-api-access-gmdvr\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.579290 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.579333 5065 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.579857 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "69902974-784e-48c1-af4a-49758dcbaa6f" (UID: "69902974-784e-48c1-af4a-49758dcbaa6f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.602686 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-config" (OuterVolumeSpecName: "config") pod "69902974-784e-48c1-af4a-49758dcbaa6f" (UID: "69902974-784e-48c1-af4a-49758dcbaa6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.680517 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.680547 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69902974-784e-48c1-af4a-49758dcbaa6f-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.883124 5065 generic.go:334] "Generic (PLEG): container finished" podID="69902974-784e-48c1-af4a-49758dcbaa6f" containerID="e2c222d790912f131ac7d7f129438b01d7050269a304588690a53d5894198e04" exitCode=0 Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.883249 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.895677 5065 generic.go:334] "Generic (PLEG): container finished" podID="c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" containerID="eb94245b05a0b8efe37facae1f11301674a53c40fb312a61967a66f36348470c" exitCode=143 Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.897961 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" event={"ID":"69902974-784e-48c1-af4a-49758dcbaa6f","Type":"ContainerDied","Data":"e2c222d790912f131ac7d7f129438b01d7050269a304588690a53d5894198e04"} Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.898011 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-tnxx4" event={"ID":"69902974-784e-48c1-af4a-49758dcbaa6f","Type":"ContainerDied","Data":"98a8d351fe04cf9e7457a227dfff51391f1858324d777302ce4065de313f01ed"} Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.898025 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4","Type":"ContainerDied","Data":"eb94245b05a0b8efe37facae1f11301674a53c40fb312a61967a66f36348470c"} Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.898046 5065 scope.go:117] "RemoveContainer" containerID="e2c222d790912f131ac7d7f129438b01d7050269a304588690a53d5894198e04" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.922935 5065 scope.go:117] "RemoveContainer" containerID="d640b361a66b5a3136205e21f891f79893faebc72034c975a9e886ca8a63419b" Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.984557 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-tnxx4"] Oct 08 13:38:48 crc kubenswrapper[5065]: I1008 13:38:48.995080 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-tnxx4"] Oct 08 13:38:49 crc kubenswrapper[5065]: I1008 13:38:49.010553 5065 scope.go:117] "RemoveContainer" containerID="e2c222d790912f131ac7d7f129438b01d7050269a304588690a53d5894198e04" Oct 08 13:38:49 crc kubenswrapper[5065]: E1008 13:38:49.011762 5065 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c222d790912f131ac7d7f129438b01d7050269a304588690a53d5894198e04\": container with ID starting with e2c222d790912f131ac7d7f129438b01d7050269a304588690a53d5894198e04 not found: ID does not exist" containerID="e2c222d790912f131ac7d7f129438b01d7050269a304588690a53d5894198e04" Oct 08 13:38:49 crc kubenswrapper[5065]: I1008 13:38:49.011792 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c222d790912f131ac7d7f129438b01d7050269a304588690a53d5894198e04"} err="failed to get container status \"e2c222d790912f131ac7d7f129438b01d7050269a304588690a53d5894198e04\": rpc error: code = NotFound desc = could not find container \"e2c222d790912f131ac7d7f129438b01d7050269a304588690a53d5894198e04\": container with ID starting with e2c222d790912f131ac7d7f129438b01d7050269a304588690a53d5894198e04 not found: ID does not exist" Oct 08 13:38:49 crc kubenswrapper[5065]: I1008 13:38:49.011813 5065 scope.go:117] "RemoveContainer" containerID="d640b361a66b5a3136205e21f891f79893faebc72034c975a9e886ca8a63419b" Oct 08 13:38:49 crc kubenswrapper[5065]: E1008 13:38:49.012206 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d640b361a66b5a3136205e21f891f79893faebc72034c975a9e886ca8a63419b\": container with ID starting with d640b361a66b5a3136205e21f891f79893faebc72034c975a9e886ca8a63419b not found: ID does not exist" containerID="d640b361a66b5a3136205e21f891f79893faebc72034c975a9e886ca8a63419b" Oct 08 13:38:49 crc kubenswrapper[5065]: I1008 13:38:49.012225 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d640b361a66b5a3136205e21f891f79893faebc72034c975a9e886ca8a63419b"} err="failed to get container status \"d640b361a66b5a3136205e21f891f79893faebc72034c975a9e886ca8a63419b\": rpc error: code = NotFound desc = could not find container \"d640b361a66b5a3136205e21f891f79893faebc72034c975a9e886ca8a63419b\": container with ID starting with d640b361a66b5a3136205e21f891f79893faebc72034c975a9e886ca8a63419b not found: ID does not exist" Oct 08 13:38:49 crc kubenswrapper[5065]: I1008 13:38:49.911325 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="65c8282a-ff19-4f87-bd59-164bc28dc720" containerName="nova-scheduler-scheduler" containerID="cri-o://4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34" gracePeriod=30 Oct 08 13:38:50 crc kubenswrapper[5065]: I1008 13:38:50.889850 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69902974-784e-48c1-af4a-49758dcbaa6f" path="/var/lib/kubelet/pods/69902974-784e-48c1-af4a-49758dcbaa6f/volumes" Oct 08 13:38:51 crc kubenswrapper[5065]: I1008 13:38:51.927959 5065 generic.go:334] "Generic (PLEG): container finished" podID="fb3b42da-fb25-4e64-ae1b-1280eb615d00" containerID="1140116f393d627b40f1e1c2827001066a176489665224e6cb7507310083823c" exitCode=0 Oct 08 13:38:51 crc kubenswrapper[5065]: I1008 13:38:51.928002 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7m8vv" event={"ID":"fb3b42da-fb25-4e64-ae1b-1280eb615d00","Type":"ContainerDied","Data":"1140116f393d627b40f1e1c2827001066a176489665224e6cb7507310083823c"} Oct 08 13:38:52 crc kubenswrapper[5065]: E1008 13:38:52.417032 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown 
desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 13:38:52 crc kubenswrapper[5065]: E1008 13:38:52.418734 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 13:38:52 crc kubenswrapper[5065]: E1008 13:38:52.420005 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 13:38:52 crc kubenswrapper[5065]: E1008 13:38:52.420034 5065 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="65c8282a-ff19-4f87-bd59-164bc28dc720" containerName="nova-scheduler-scheduler" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.282757 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.468996 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-combined-ca-bundle\") pod \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.469408 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtgmj\" (UniqueName: \"kubernetes.io/projected/fb3b42da-fb25-4e64-ae1b-1280eb615d00-kube-api-access-mtgmj\") pod \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.469561 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-config-data\") pod \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.469585 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-scripts\") pod \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\" (UID: \"fb3b42da-fb25-4e64-ae1b-1280eb615d00\") " Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.475317 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-scripts" (OuterVolumeSpecName: "scripts") pod "fb3b42da-fb25-4e64-ae1b-1280eb615d00" (UID: "fb3b42da-fb25-4e64-ae1b-1280eb615d00"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.490824 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb3b42da-fb25-4e64-ae1b-1280eb615d00-kube-api-access-mtgmj" (OuterVolumeSpecName: "kube-api-access-mtgmj") pod "fb3b42da-fb25-4e64-ae1b-1280eb615d00" (UID: "fb3b42da-fb25-4e64-ae1b-1280eb615d00"). InnerVolumeSpecName "kube-api-access-mtgmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.496122 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb3b42da-fb25-4e64-ae1b-1280eb615d00" (UID: "fb3b42da-fb25-4e64-ae1b-1280eb615d00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.498225 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-config-data" (OuterVolumeSpecName: "config-data") pod "fb3b42da-fb25-4e64-ae1b-1280eb615d00" (UID: "fb3b42da-fb25-4e64-ae1b-1280eb615d00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.571577 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.571605 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.571614 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3b42da-fb25-4e64-ae1b-1280eb615d00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.571625 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtgmj\" (UniqueName: \"kubernetes.io/projected/fb3b42da-fb25-4e64-ae1b-1280eb615d00-kube-api-access-mtgmj\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.923734 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.974152 5065 generic.go:334] "Generic (PLEG): container finished" podID="c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" containerID="5e46b585a3b70055fa546cd0b8aeb41343b5665f4b3ad2798a120cec4080e9b7" exitCode=0 Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.974221 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4","Type":"ContainerDied","Data":"5e46b585a3b70055fa546cd0b8aeb41343b5665f4b3ad2798a120cec4080e9b7"} Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.982438 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95xlt\" (UniqueName: \"kubernetes.io/projected/65c8282a-ff19-4f87-bd59-164bc28dc720-kube-api-access-95xlt\") pod \"65c8282a-ff19-4f87-bd59-164bc28dc720\" (UID: \"65c8282a-ff19-4f87-bd59-164bc28dc720\") " Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.982587 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c8282a-ff19-4f87-bd59-164bc28dc720-config-data\") pod \"65c8282a-ff19-4f87-bd59-164bc28dc720\" (UID: \"65c8282a-ff19-4f87-bd59-164bc28dc720\") " Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.982640 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c8282a-ff19-4f87-bd59-164bc28dc720-combined-ca-bundle\") pod \"65c8282a-ff19-4f87-bd59-164bc28dc720\" (UID: \"65c8282a-ff19-4f87-bd59-164bc28dc720\") " Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.983957 5065 generic.go:334] "Generic (PLEG): container finished" podID="65c8282a-ff19-4f87-bd59-164bc28dc720" containerID="4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34" exitCode=0 Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.984013 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"65c8282a-ff19-4f87-bd59-164bc28dc720","Type":"ContainerDied","Data":"4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34"} Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.984038 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"65c8282a-ff19-4f87-bd59-164bc28dc720","Type":"ContainerDied","Data":"ceed3d0a6c653d9020af0c3347c0ad94d78ba8781db358760bdd21b8a2862154"} Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.984055 5065 scope.go:117] "RemoveContainer" containerID="4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.984161 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.988135 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c8282a-ff19-4f87-bd59-164bc28dc720-kube-api-access-95xlt" (OuterVolumeSpecName: "kube-api-access-95xlt") pod "65c8282a-ff19-4f87-bd59-164bc28dc720" (UID: "65c8282a-ff19-4f87-bd59-164bc28dc720"). InnerVolumeSpecName "kube-api-access-95xlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.993101 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7m8vv" event={"ID":"fb3b42da-fb25-4e64-ae1b-1280eb615d00","Type":"ContainerDied","Data":"8ece47061ad7db2d0558c88e499d8f68768a7c34cb9e574d0a7f6aa107770d12"} Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.993148 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ece47061ad7db2d0558c88e499d8f68768a7c34cb9e574d0a7f6aa107770d12" Oct 08 13:38:53 crc kubenswrapper[5065]: I1008 13:38:53.993215 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7m8vv" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.035053 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.037845 5065 scope.go:117] "RemoveContainer" containerID="4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34" Oct 08 13:38:54 crc kubenswrapper[5065]: E1008 13:38:54.038222 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34\": container with ID starting with 4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34 not found: ID does not exist" containerID="4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.038263 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34"} err="failed to get container status \"4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34\": rpc error: code = NotFound desc = could not find container \"4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34\": container with ID starting with 4659dd290d9ef0f309fbe6a92d18bda440b0045b4bc34bcc5568e5d065633a34 not found: ID does not exist" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.039807 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c8282a-ff19-4f87-bd59-164bc28dc720-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65c8282a-ff19-4f87-bd59-164bc28dc720" (UID: "65c8282a-ff19-4f87-bd59-164bc28dc720"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.049831 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c8282a-ff19-4f87-bd59-164bc28dc720-config-data" (OuterVolumeSpecName: "config-data") pod "65c8282a-ff19-4f87-bd59-164bc28dc720" (UID: "65c8282a-ff19-4f87-bd59-164bc28dc720"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.059677 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 13:38:54 crc kubenswrapper[5065]: E1008 13:38:54.060144 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" containerName="nova-api-api" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.060163 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" containerName="nova-api-api" Oct 08 13:38:54 crc kubenswrapper[5065]: E1008 13:38:54.060182 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c8282a-ff19-4f87-bd59-164bc28dc720" containerName="nova-scheduler-scheduler" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.060190 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c8282a-ff19-4f87-bd59-164bc28dc720" containerName="nova-scheduler-scheduler" Oct 08 13:38:54 crc kubenswrapper[5065]: E1008 13:38:54.060209 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69902974-784e-48c1-af4a-49758dcbaa6f" containerName="dnsmasq-dns" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.060217 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="69902974-784e-48c1-af4a-49758dcbaa6f" containerName="dnsmasq-dns" Oct 08 13:38:54 crc kubenswrapper[5065]: E1008 13:38:54.060242 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb3b42da-fb25-4e64-ae1b-1280eb615d00" containerName="nova-cell1-conductor-db-sync" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.060251 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb3b42da-fb25-4e64-ae1b-1280eb615d00" containerName="nova-cell1-conductor-db-sync" Oct 08 13:38:54 crc kubenswrapper[5065]: E1008 13:38:54.060274 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69902974-784e-48c1-af4a-49758dcbaa6f" containerName="init" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.060283 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="69902974-784e-48c1-af4a-49758dcbaa6f" containerName="init" Oct 08 13:38:54 crc kubenswrapper[5065]: E1008 13:38:54.060298 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fca4dc-7e0d-4bb2-bf66-7f70f13802ae" containerName="nova-manage" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.060305 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fca4dc-7e0d-4bb2-bf66-7f70f13802ae" containerName="nova-manage" Oct 08 13:38:54 crc kubenswrapper[5065]: E1008 13:38:54.060319 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" containerName="nova-api-log" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.060327 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" containerName="nova-api-log" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.060556 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" containerName="nova-api-api" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.060579 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c8282a-ff19-4f87-bd59-164bc28dc720" containerName="nova-scheduler-scheduler" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.060589 5065 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fb3b42da-fb25-4e64-ae1b-1280eb615d00" containerName="nova-cell1-conductor-db-sync" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.060604 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" containerName="nova-api-log" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.060627 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="98fca4dc-7e0d-4bb2-bf66-7f70f13802ae" containerName="nova-manage" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.060639 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="69902974-784e-48c1-af4a-49758dcbaa6f" containerName="dnsmasq-dns" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.061505 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.065472 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.069643 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.088390 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95xlt\" (UniqueName: \"kubernetes.io/projected/65c8282a-ff19-4f87-bd59-164bc28dc720-kube-api-access-95xlt\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.088446 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c8282a-ff19-4f87-bd59-164bc28dc720-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.088457 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c8282a-ff19-4f87-bd59-164bc28dc720-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.190043 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-config-data\") pod \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.190106 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-logs\") pod \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.190237 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5j49\" (UniqueName: \"kubernetes.io/projected/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-kube-api-access-t5j49\") pod \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.190285 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-combined-ca-bundle\") pod \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\" (UID: \"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4\") " Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.190623 5065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ab13f6-4348-4848-9149-4d1ee240d1ed-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"14ab13f6-4348-4848-9149-4d1ee240d1ed\") " pod="openstack/nova-cell1-conductor-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.190669 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ab13f6-4348-4848-9149-4d1ee240d1ed-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"14ab13f6-4348-4848-9149-4d1ee240d1ed\") " pod="openstack/nova-cell1-conductor-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.190745 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz4hq\" (UniqueName: \"kubernetes.io/projected/14ab13f6-4348-4848-9149-4d1ee240d1ed-kube-api-access-xz4hq\") pod \"nova-cell1-conductor-0\" (UID: \"14ab13f6-4348-4848-9149-4d1ee240d1ed\") " pod="openstack/nova-cell1-conductor-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.190795 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-logs" (OuterVolumeSpecName: "logs") pod "c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" (UID: "c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.193768 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-kube-api-access-t5j49" (OuterVolumeSpecName: "kube-api-access-t5j49") pod "c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" (UID: "c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4"). InnerVolumeSpecName "kube-api-access-t5j49". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.218995 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-config-data" (OuterVolumeSpecName: "config-data") pod "c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" (UID: "c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.223052 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" (UID: "c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.292004 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz4hq\" (UniqueName: \"kubernetes.io/projected/14ab13f6-4348-4848-9149-4d1ee240d1ed-kube-api-access-xz4hq\") pod \"nova-cell1-conductor-0\" (UID: \"14ab13f6-4348-4848-9149-4d1ee240d1ed\") " pod="openstack/nova-cell1-conductor-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.292143 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ab13f6-4348-4848-9149-4d1ee240d1ed-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"14ab13f6-4348-4848-9149-4d1ee240d1ed\") " pod="openstack/nova-cell1-conductor-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.292188 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ab13f6-4348-4848-9149-4d1ee240d1ed-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"14ab13f6-4348-4848-9149-4d1ee240d1ed\") " pod="openstack/nova-cell1-conductor-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.292244 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.292258 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.292270 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5j49\" (UniqueName: \"kubernetes.io/projected/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-kube-api-access-t5j49\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.292282 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.297529 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ab13f6-4348-4848-9149-4d1ee240d1ed-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"14ab13f6-4348-4848-9149-4d1ee240d1ed\") " pod="openstack/nova-cell1-conductor-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.297569 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ab13f6-4348-4848-9149-4d1ee240d1ed-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"14ab13f6-4348-4848-9149-4d1ee240d1ed\") " pod="openstack/nova-cell1-conductor-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.312450 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz4hq\" (UniqueName: \"kubernetes.io/projected/14ab13f6-4348-4848-9149-4d1ee240d1ed-kube-api-access-xz4hq\") pod \"nova-cell1-conductor-0\" (UID: \"14ab13f6-4348-4848-9149-4d1ee240d1ed\") " pod="openstack/nova-cell1-conductor-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.375114 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.375166 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.378299 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.410648 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.421578 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.429708 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.431051 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.434341 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.461604 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.598102 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3629fb73-6155-44ec-b119-4da654503863-config-data\") pod \"nova-scheduler-0\" (UID: \"3629fb73-6155-44ec-b119-4da654503863\") " pod="openstack/nova-scheduler-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.598183 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxxmg\" (UniqueName: \"kubernetes.io/projected/3629fb73-6155-44ec-b119-4da654503863-kube-api-access-rxxmg\") pod \"nova-scheduler-0\" (UID: \"3629fb73-6155-44ec-b119-4da654503863\") " pod="openstack/nova-scheduler-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.598233 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3629fb73-6155-44ec-b119-4da654503863-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3629fb73-6155-44ec-b119-4da654503863\") " pod="openstack/nova-scheduler-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.699881 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3629fb73-6155-44ec-b119-4da654503863-config-data\") pod \"nova-scheduler-0\" (UID: \"3629fb73-6155-44ec-b119-4da654503863\") " pod="openstack/nova-scheduler-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.699940 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxxmg\" (UniqueName: \"kubernetes.io/projected/3629fb73-6155-44ec-b119-4da654503863-kube-api-access-rxxmg\") pod \"nova-scheduler-0\" (UID: 
\"3629fb73-6155-44ec-b119-4da654503863\") " pod="openstack/nova-scheduler-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.699976 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3629fb73-6155-44ec-b119-4da654503863-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3629fb73-6155-44ec-b119-4da654503863\") " pod="openstack/nova-scheduler-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.705156 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3629fb73-6155-44ec-b119-4da654503863-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3629fb73-6155-44ec-b119-4da654503863\") " pod="openstack/nova-scheduler-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.707959 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3629fb73-6155-44ec-b119-4da654503863-config-data\") pod \"nova-scheduler-0\" (UID: \"3629fb73-6155-44ec-b119-4da654503863\") " pod="openstack/nova-scheduler-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.718199 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxxmg\" (UniqueName: \"kubernetes.io/projected/3629fb73-6155-44ec-b119-4da654503863-kube-api-access-rxxmg\") pod \"nova-scheduler-0\" (UID: \"3629fb73-6155-44ec-b119-4da654503863\") " pod="openstack/nova-scheduler-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.767325 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.893051 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c8282a-ff19-4f87-bd59-164bc28dc720" path="/var/lib/kubelet/pods/65c8282a-ff19-4f87-bd59-164bc28dc720/volumes" Oct 08 13:38:54 crc kubenswrapper[5065]: I1008 13:38:54.894260 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 13:38:54 crc kubenswrapper[5065]: W1008 13:38:54.906101 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14ab13f6_4348_4848_9149_4d1ee240d1ed.slice/crio-736112f627b33774274a94e82f1c964a9798932141b0318ba04044bc75632bef WatchSource:0}: Error finding container 736112f627b33774274a94e82f1c964a9798932141b0318ba04044bc75632bef: Status 404 returned error can't find the container with id 736112f627b33774274a94e82f1c964a9798932141b0318ba04044bc75632bef Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.005734 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4","Type":"ContainerDied","Data":"46fffb7c396eb285a78c344405198bbeacb99289635f7e561c074e22dd72c541"} Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.005794 5065 scope.go:117] "RemoveContainer" containerID="5e46b585a3b70055fa546cd0b8aeb41343b5665f4b3ad2798a120cec4080e9b7" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.005800 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.013215 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"14ab13f6-4348-4848-9149-4d1ee240d1ed","Type":"ContainerStarted","Data":"736112f627b33774274a94e82f1c964a9798932141b0318ba04044bc75632bef"} Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.054737 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.061798 5065 scope.go:117] "RemoveContainer" containerID="eb94245b05a0b8efe37facae1f11301674a53c40fb312a61967a66f36348470c" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.067013 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.075967 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.077717 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.079799 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.085425 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.212760 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f56ba4-d900-4098-90f2-ae5ccd32357f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") " pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.212805 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f56ba4-d900-4098-90f2-ae5ccd32357f-logs\") pod \"nova-api-0\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") " pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.212829 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t467\" (UniqueName: \"kubernetes.io/projected/02f56ba4-d900-4098-90f2-ae5ccd32357f-kube-api-access-8t467\") pod \"nova-api-0\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") " pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.212881 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f56ba4-d900-4098-90f2-ae5ccd32357f-config-data\") pod \"nova-api-0\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") " pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.248149 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:38:55 crc kubenswrapper[5065]: W1008 13:38:55.252538 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3629fb73_6155_44ec_b119_4da654503863.slice/crio-02f23b1c88f36f5c1a5f7589005d2f3fccddeec409fa599685c3200b2dcdf3b6 WatchSource:0}: Error finding container 02f23b1c88f36f5c1a5f7589005d2f3fccddeec409fa599685c3200b2dcdf3b6: Status 404 returned error can't find the container with 
id 02f23b1c88f36f5c1a5f7589005d2f3fccddeec409fa599685c3200b2dcdf3b6 Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.314683 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f56ba4-d900-4098-90f2-ae5ccd32357f-config-data\") pod \"nova-api-0\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") " pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.314923 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f56ba4-d900-4098-90f2-ae5ccd32357f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") " pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.314983 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f56ba4-d900-4098-90f2-ae5ccd32357f-logs\") pod \"nova-api-0\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") " pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.315008 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t467\" (UniqueName: \"kubernetes.io/projected/02f56ba4-d900-4098-90f2-ae5ccd32357f-kube-api-access-8t467\") pod \"nova-api-0\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") " pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.316149 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f56ba4-d900-4098-90f2-ae5ccd32357f-logs\") pod \"nova-api-0\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") " pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.319264 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f56ba4-d900-4098-90f2-ae5ccd32357f-config-data\") pod \"nova-api-0\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") " pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.319344 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f56ba4-d900-4098-90f2-ae5ccd32357f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") " pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.331677 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t467\" (UniqueName: \"kubernetes.io/projected/02f56ba4-d900-4098-90f2-ae5ccd32357f-kube-api-access-8t467\") pod \"nova-api-0\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") " pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.404340 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 13:38:55 crc kubenswrapper[5065]: I1008 13:38:55.877359 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 13:38:55 crc kubenswrapper[5065]: W1008 13:38:55.879246 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02f56ba4_d900_4098_90f2_ae5ccd32357f.slice/crio-6a0bc4551fe65e364a35270079b1832789905bcb2b2d39f1422c5c8d0f45296f WatchSource:0}: Error finding container 6a0bc4551fe65e364a35270079b1832789905bcb2b2d39f1422c5c8d0f45296f: Status 404 returned error can't find the container with id 6a0bc4551fe65e364a35270079b1832789905bcb2b2d39f1422c5c8d0f45296f Oct 08 13:38:56 crc kubenswrapper[5065]: I1008 13:38:56.027186 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"14ab13f6-4348-4848-9149-4d1ee240d1ed","Type":"ContainerStarted","Data":"3536fcb126639de1eca1e9e8fb96986742592f8949764d6a7616a3f3a55a523f"} Oct 08 13:38:56 crc kubenswrapper[5065]: I1008 13:38:56.027269 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 08 13:38:56 crc kubenswrapper[5065]: I1008 13:38:56.030028 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3629fb73-6155-44ec-b119-4da654503863","Type":"ContainerStarted","Data":"41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45"} Oct 08 13:38:56 crc kubenswrapper[5065]: I1008 13:38:56.030070 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3629fb73-6155-44ec-b119-4da654503863","Type":"ContainerStarted","Data":"02f23b1c88f36f5c1a5f7589005d2f3fccddeec409fa599685c3200b2dcdf3b6"} Oct 08 13:38:56 crc kubenswrapper[5065]: I1008 13:38:56.034044 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02f56ba4-d900-4098-90f2-ae5ccd32357f","Type":"ContainerStarted","Data":"6a0bc4551fe65e364a35270079b1832789905bcb2b2d39f1422c5c8d0f45296f"} Oct 08 13:38:56 crc kubenswrapper[5065]: I1008 13:38:56.052324 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.052303261 podStartE2EDuration="2.052303261s" podCreationTimestamp="2025-10-08 13:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:38:56.042607767 +0000 UTC m=+1237.819989524" watchObservedRunningTime="2025-10-08 13:38:56.052303261 +0000 UTC m=+1237.829685018" Oct 08 13:38:56 crc kubenswrapper[5065]: I1008 13:38:56.064694 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.064654258 podStartE2EDuration="2.064654258s" podCreationTimestamp="2025-10-08 13:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:38:56.060125111 +0000 UTC m=+1237.837506868" watchObservedRunningTime="2025-10-08 13:38:56.064654258 +0000 UTC m=+1237.842036015" Oct 08 13:38:56 crc kubenswrapper[5065]: I1008 13:38:56.889790 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4" path="/var/lib/kubelet/pods/c0ebdb6e-39ba-44ca-a37f-0a72854b9aa4/volumes" Oct 08 13:38:57 crc kubenswrapper[5065]: I1008 
13:38:57.047636 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02f56ba4-d900-4098-90f2-ae5ccd32357f","Type":"ContainerStarted","Data":"873fae0652c62cbf7840c4b36b72575683e41bcb41b3efc76a35469e403d081e"} Oct 08 13:38:57 crc kubenswrapper[5065]: I1008 13:38:57.048066 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02f56ba4-d900-4098-90f2-ae5ccd32357f","Type":"ContainerStarted","Data":"4a262fcc062d144fa0034ced110a8b73a8db79dbb941b0bd7caebe0b46ca2cae"} Oct 08 13:38:57 crc kubenswrapper[5065]: I1008 13:38:57.994642 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 08 13:38:58 crc kubenswrapper[5065]: I1008 13:38:58.106602 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.106577363 podStartE2EDuration="3.106577363s" podCreationTimestamp="2025-10-08 13:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:38:58.099568106 +0000 UTC m=+1239.876949883" watchObservedRunningTime="2025-10-08 13:38:58.106577363 +0000 UTC m=+1239.883959120" Oct 08 13:38:59 crc kubenswrapper[5065]: I1008 13:38:59.767511 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 08 13:39:01 crc kubenswrapper[5065]: I1008 13:39:01.486355 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 13:39:01 crc kubenswrapper[5065]: I1008 13:39:01.486914 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="de6b79be-fc23-4b08-bc30-192946f827af" containerName="kube-state-metrics" containerID="cri-o://b51dcdd3f01b58385b881697bb2b6d5395b8e36ee0ccbcc68fb485c141d9f1a9" gracePeriod=30 Oct 08 13:39:01 crc kubenswrapper[5065]: I1008 13:39:01.946171 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.049198 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j676f\" (UniqueName: \"kubernetes.io/projected/de6b79be-fc23-4b08-bc30-192946f827af-kube-api-access-j676f\") pod \"de6b79be-fc23-4b08-bc30-192946f827af\" (UID: \"de6b79be-fc23-4b08-bc30-192946f827af\") " Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.054918 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de6b79be-fc23-4b08-bc30-192946f827af-kube-api-access-j676f" (OuterVolumeSpecName: "kube-api-access-j676f") pod "de6b79be-fc23-4b08-bc30-192946f827af" (UID: "de6b79be-fc23-4b08-bc30-192946f827af"). InnerVolumeSpecName "kube-api-access-j676f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.117970 5065 generic.go:334] "Generic (PLEG): container finished" podID="de6b79be-fc23-4b08-bc30-192946f827af" containerID="b51dcdd3f01b58385b881697bb2b6d5395b8e36ee0ccbcc68fb485c141d9f1a9" exitCode=2 Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.118037 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de6b79be-fc23-4b08-bc30-192946f827af","Type":"ContainerDied","Data":"b51dcdd3f01b58385b881697bb2b6d5395b8e36ee0ccbcc68fb485c141d9f1a9"} Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.118077 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de6b79be-fc23-4b08-bc30-192946f827af","Type":"ContainerDied","Data":"ac6cb4ebb0a4079584f6062f4b622297c4b292d76613b132b77d70099fe9b037"} Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.118098 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.118106 5065 scope.go:117] "RemoveContainer" containerID="b51dcdd3f01b58385b881697bb2b6d5395b8e36ee0ccbcc68fb485c141d9f1a9" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.139197 5065 scope.go:117] "RemoveContainer" containerID="b51dcdd3f01b58385b881697bb2b6d5395b8e36ee0ccbcc68fb485c141d9f1a9" Oct 08 13:39:02 crc kubenswrapper[5065]: E1008 13:39:02.140546 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b51dcdd3f01b58385b881697bb2b6d5395b8e36ee0ccbcc68fb485c141d9f1a9\": container with ID starting with b51dcdd3f01b58385b881697bb2b6d5395b8e36ee0ccbcc68fb485c141d9f1a9 not found: ID does not exist" containerID="b51dcdd3f01b58385b881697bb2b6d5395b8e36ee0ccbcc68fb485c141d9f1a9" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.140577 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b51dcdd3f01b58385b881697bb2b6d5395b8e36ee0ccbcc68fb485c141d9f1a9"} err="failed to get container status \"b51dcdd3f01b58385b881697bb2b6d5395b8e36ee0ccbcc68fb485c141d9f1a9\": rpc error: code = NotFound desc = could not find container \"b51dcdd3f01b58385b881697bb2b6d5395b8e36ee0ccbcc68fb485c141d9f1a9\": container with ID starting with b51dcdd3f01b58385b881697bb2b6d5395b8e36ee0ccbcc68fb485c141d9f1a9 not found: ID does not exist" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.152255 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j676f\" (UniqueName: \"kubernetes.io/projected/de6b79be-fc23-4b08-bc30-192946f827af-kube-api-access-j676f\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.172128 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.185898 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.194684 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 13:39:02 crc kubenswrapper[5065]: E1008 13:39:02.195234 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6b79be-fc23-4b08-bc30-192946f827af" containerName="kube-state-metrics" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.195252 5065 
state_mem.go:107] "Deleted CPUSet assignment" podUID="de6b79be-fc23-4b08-bc30-192946f827af" containerName="kube-state-metrics" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.195485 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="de6b79be-fc23-4b08-bc30-192946f827af" containerName="kube-state-metrics" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.196130 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.202130 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.202361 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.205774 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.355373 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.355721 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.355809 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55g5z\" (UniqueName: \"kubernetes.io/projected/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-api-access-55g5z\") pod \"kube-state-metrics-0\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.355844 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.457506 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55g5z\" (UniqueName: \"kubernetes.io/projected/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-api-access-55g5z\") pod \"kube-state-metrics-0\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.457863 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.458138 5065 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.458257 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.462027 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.464506 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.464778 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.472447 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55g5z\" (UniqueName: \"kubernetes.io/projected/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-api-access-55g5z\") pod \"kube-state-metrics-0\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.519056 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.888094 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de6b79be-fc23-4b08-bc30-192946f827af" path="/var/lib/kubelet/pods/de6b79be-fc23-4b08-bc30-192946f827af/volumes" Oct 08 13:39:02 crc kubenswrapper[5065]: I1008 13:39:02.986565 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 13:39:02 crc kubenswrapper[5065]: W1008 13:39:02.995247 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcde619b2_b551_4a41_b2f2_c38f1b507a82.slice/crio-3abc5a3d6943979f9bdb9a56e241cb87a53a2eb6cfcda0cf628e8b354d551042 WatchSource:0}: Error finding container 3abc5a3d6943979f9bdb9a56e241cb87a53a2eb6cfcda0cf628e8b354d551042: Status 404 returned error can't find the container with id 3abc5a3d6943979f9bdb9a56e241cb87a53a2eb6cfcda0cf628e8b354d551042 Oct 08 13:39:03 crc kubenswrapper[5065]: I1008 13:39:03.129055 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cde619b2-b551-4a41-b2f2-c38f1b507a82","Type":"ContainerStarted","Data":"3abc5a3d6943979f9bdb9a56e241cb87a53a2eb6cfcda0cf628e8b354d551042"} Oct 08 13:39:03 crc kubenswrapper[5065]: I1008 13:39:03.166845 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:03 crc kubenswrapper[5065]: I1008 13:39:03.167225 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="sg-core" containerID="cri-o://61cbc642a10bdde48e29aac4d484ad237361c92bbcd5333e7f94cbd562cff639" gracePeriod=30 Oct 08 13:39:03 crc kubenswrapper[5065]: I1008 13:39:03.167240 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="proxy-httpd" containerID="cri-o://15f53413bf745e48057f4ae3b8302183aa8bea3fed64ac50f9fab2de6c5f1908" gracePeriod=30 Oct 08 13:39:03 crc kubenswrapper[5065]: I1008 13:39:03.167303 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="ceilometer-notification-agent" containerID="cri-o://eceba4e5ff4a7b0a17a482280aa926db083c1840f8a52b7bb5f54470b000fd68" gracePeriod=30 Oct 08 13:39:03 crc kubenswrapper[5065]: I1008 13:39:03.167394 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="ceilometer-central-agent" containerID="cri-o://ba7b8a862105c0b0b8721e1c3638070413db61428e0120dfd34f30372195dffa" gracePeriod=30 Oct 08 13:39:04 crc kubenswrapper[5065]: I1008 13:39:04.138463 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cde619b2-b551-4a41-b2f2-c38f1b507a82","Type":"ContainerStarted","Data":"1f708c017b5683915f1b955e06bbebbafcc9a7518e52663fdd2d0620e2d9d89e"} Oct 08 13:39:04 crc kubenswrapper[5065]: I1008 13:39:04.138765 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 08 13:39:04 crc kubenswrapper[5065]: I1008 13:39:04.142719 5065 generic.go:334] "Generic (PLEG): container finished" podID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" 
containerID="15f53413bf745e48057f4ae3b8302183aa8bea3fed64ac50f9fab2de6c5f1908" exitCode=0 Oct 08 13:39:04 crc kubenswrapper[5065]: I1008 13:39:04.142741 5065 generic.go:334] "Generic (PLEG): container finished" podID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerID="61cbc642a10bdde48e29aac4d484ad237361c92bbcd5333e7f94cbd562cff639" exitCode=2 Oct 08 13:39:04 crc kubenswrapper[5065]: I1008 13:39:04.142748 5065 generic.go:334] "Generic (PLEG): container finished" podID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerID="ba7b8a862105c0b0b8721e1c3638070413db61428e0120dfd34f30372195dffa" exitCode=0 Oct 08 13:39:04 crc kubenswrapper[5065]: I1008 13:39:04.142764 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2","Type":"ContainerDied","Data":"15f53413bf745e48057f4ae3b8302183aa8bea3fed64ac50f9fab2de6c5f1908"} Oct 08 13:39:04 crc kubenswrapper[5065]: I1008 13:39:04.142779 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2","Type":"ContainerDied","Data":"61cbc642a10bdde48e29aac4d484ad237361c92bbcd5333e7f94cbd562cff639"} Oct 08 13:39:04 crc kubenswrapper[5065]: I1008 13:39:04.142790 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2","Type":"ContainerDied","Data":"ba7b8a862105c0b0b8721e1c3638070413db61428e0120dfd34f30372195dffa"} Oct 08 13:39:04 crc kubenswrapper[5065]: I1008 13:39:04.161568 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.794016864 podStartE2EDuration="2.161546245s" podCreationTimestamp="2025-10-08 13:39:02 +0000 UTC" firstStartedPulling="2025-10-08 13:39:02.997487463 +0000 UTC m=+1244.774869230" lastFinishedPulling="2025-10-08 13:39:03.365016854 +0000 UTC m=+1245.142398611" observedRunningTime="2025-10-08 13:39:04.15390012 +0000 UTC m=+1245.931281887" watchObservedRunningTime="2025-10-08 13:39:04.161546245 +0000 UTC m=+1245.938928002" Oct 08 13:39:04 crc kubenswrapper[5065]: I1008 13:39:04.411112 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 08 13:39:04 crc kubenswrapper[5065]: I1008 13:39:04.767718 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 08 13:39:04 crc kubenswrapper[5065]: I1008 13:39:04.798077 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 08 13:39:05 crc kubenswrapper[5065]: I1008 13:39:05.181198 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 08 13:39:05 crc kubenswrapper[5065]: I1008 13:39:05.405546 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 13:39:05 crc kubenswrapper[5065]: I1008 13:39:05.405605 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 13:39:06 crc kubenswrapper[5065]: I1008 13:39:06.487623 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="02f56ba4-d900-4098-90f2-ae5ccd32357f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 13:39:06 crc kubenswrapper[5065]: I1008 13:39:06.487902 
5065 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="02f56ba4-d900-4098-90f2-ae5ccd32357f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.084895 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.165408 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-sg-core-conf-yaml\") pod \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.165491 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-scripts\") pod \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.165529 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xllrl\" (UniqueName: \"kubernetes.io/projected/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-kube-api-access-xllrl\") pod \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.165584 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-log-httpd\") pod \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.165624 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-config-data\") pod \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.165690 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-combined-ca-bundle\") pod \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.165716 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-run-httpd\") pod \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\" (UID: \"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2\") " Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.166210 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" (UID: "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2"). InnerVolumeSpecName "log-httpd". 
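
[Note] The two "Probe failed" entries above are the nova-api-0 startup probes issuing an HTTP GET against the pod IP; the quoted output is net/http's standard wording when no response headers arrive before the client deadline. A self-contained reproduction (URL taken from the log; the 1-second timeout is an assumed probe timeoutSeconds):

    // Reproduces the probe's failure mode: an HTTP GET that times out
    // while awaiting headers yields exactly the error text quoted in the
    // "Probe failed" output above.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get("http://10.217.0.193:8774/")
        if err != nil {
            // e.g. context deadline exceeded (Client.Timeout exceeded
            // while awaiting headers)
            fmt.Println("probe failure:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe success:", resp.Status)
    }
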
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.166562 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" (UID: "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.166589 5065 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.177431 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-scripts" (OuterVolumeSpecName: "scripts") pod "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" (UID: "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.183752 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-kube-api-access-xllrl" (OuterVolumeSpecName: "kube-api-access-xllrl") pod "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" (UID: "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2"). InnerVolumeSpecName "kube-api-access-xllrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.186489 5065 generic.go:334] "Generic (PLEG): container finished" podID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerID="eceba4e5ff4a7b0a17a482280aa926db083c1840f8a52b7bb5f54470b000fd68" exitCode=0 Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.186534 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2","Type":"ContainerDied","Data":"eceba4e5ff4a7b0a17a482280aa926db083c1840f8a52b7bb5f54470b000fd68"} Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.186563 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97358ac5-7f8d-4a1c-ae8a-9f981730c1b2","Type":"ContainerDied","Data":"3c1c250c587e0e41773b9925f65a735adc5b6716370192b774caef515b254207"} Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.186582 5065 scope.go:117] "RemoveContainer" containerID="15f53413bf745e48057f4ae3b8302183aa8bea3fed64ac50f9fab2de6c5f1908" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.186740 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.211351 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" (UID: "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.253573 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" (UID: "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.268648 5065 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.268683 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.268696 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xllrl\" (UniqueName: \"kubernetes.io/projected/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-kube-api-access-xllrl\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.268710 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.268720 5065 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.278655 5065 scope.go:117] "RemoveContainer" containerID="61cbc642a10bdde48e29aac4d484ad237361c92bbcd5333e7f94cbd562cff639" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.306867 5065 scope.go:117] "RemoveContainer" containerID="eceba4e5ff4a7b0a17a482280aa926db083c1840f8a52b7bb5f54470b000fd68" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.315055 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-config-data" (OuterVolumeSpecName: "config-data") pod "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" (UID: "97358ac5-7f8d-4a1c-ae8a-9f981730c1b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.325096 5065 scope.go:117] "RemoveContainer" containerID="ba7b8a862105c0b0b8721e1c3638070413db61428e0120dfd34f30372195dffa" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.343237 5065 scope.go:117] "RemoveContainer" containerID="15f53413bf745e48057f4ae3b8302183aa8bea3fed64ac50f9fab2de6c5f1908" Oct 08 13:39:08 crc kubenswrapper[5065]: E1008 13:39:08.343626 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f53413bf745e48057f4ae3b8302183aa8bea3fed64ac50f9fab2de6c5f1908\": container with ID starting with 15f53413bf745e48057f4ae3b8302183aa8bea3fed64ac50f9fab2de6c5f1908 not found: ID does not exist" containerID="15f53413bf745e48057f4ae3b8302183aa8bea3fed64ac50f9fab2de6c5f1908" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.343670 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f53413bf745e48057f4ae3b8302183aa8bea3fed64ac50f9fab2de6c5f1908"} err="failed to get container status \"15f53413bf745e48057f4ae3b8302183aa8bea3fed64ac50f9fab2de6c5f1908\": rpc error: code = NotFound desc = could not find container \"15f53413bf745e48057f4ae3b8302183aa8bea3fed64ac50f9fab2de6c5f1908\": container with ID starting with 15f53413bf745e48057f4ae3b8302183aa8bea3fed64ac50f9fab2de6c5f1908 not found: ID does not exist" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.343697 5065 scope.go:117] "RemoveContainer" containerID="61cbc642a10bdde48e29aac4d484ad237361c92bbcd5333e7f94cbd562cff639" Oct 08 13:39:08 crc kubenswrapper[5065]: E1008 13:39:08.344059 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61cbc642a10bdde48e29aac4d484ad237361c92bbcd5333e7f94cbd562cff639\": container with ID starting with 61cbc642a10bdde48e29aac4d484ad237361c92bbcd5333e7f94cbd562cff639 not found: ID does not exist" containerID="61cbc642a10bdde48e29aac4d484ad237361c92bbcd5333e7f94cbd562cff639" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.344083 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61cbc642a10bdde48e29aac4d484ad237361c92bbcd5333e7f94cbd562cff639"} err="failed to get container status \"61cbc642a10bdde48e29aac4d484ad237361c92bbcd5333e7f94cbd562cff639\": rpc error: code = NotFound desc = could not find container \"61cbc642a10bdde48e29aac4d484ad237361c92bbcd5333e7f94cbd562cff639\": container with ID starting with 61cbc642a10bdde48e29aac4d484ad237361c92bbcd5333e7f94cbd562cff639 not found: ID does not exist" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.344100 5065 scope.go:117] "RemoveContainer" containerID="eceba4e5ff4a7b0a17a482280aa926db083c1840f8a52b7bb5f54470b000fd68" Oct 08 13:39:08 crc kubenswrapper[5065]: E1008 13:39:08.344371 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eceba4e5ff4a7b0a17a482280aa926db083c1840f8a52b7bb5f54470b000fd68\": container with ID starting with eceba4e5ff4a7b0a17a482280aa926db083c1840f8a52b7bb5f54470b000fd68 not found: ID does not exist" containerID="eceba4e5ff4a7b0a17a482280aa926db083c1840f8a52b7bb5f54470b000fd68" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.344429 5065 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"eceba4e5ff4a7b0a17a482280aa926db083c1840f8a52b7bb5f54470b000fd68"} err="failed to get container status \"eceba4e5ff4a7b0a17a482280aa926db083c1840f8a52b7bb5f54470b000fd68\": rpc error: code = NotFound desc = could not find container \"eceba4e5ff4a7b0a17a482280aa926db083c1840f8a52b7bb5f54470b000fd68\": container with ID starting with eceba4e5ff4a7b0a17a482280aa926db083c1840f8a52b7bb5f54470b000fd68 not found: ID does not exist" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.344455 5065 scope.go:117] "RemoveContainer" containerID="ba7b8a862105c0b0b8721e1c3638070413db61428e0120dfd34f30372195dffa" Oct 08 13:39:08 crc kubenswrapper[5065]: E1008 13:39:08.344739 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7b8a862105c0b0b8721e1c3638070413db61428e0120dfd34f30372195dffa\": container with ID starting with ba7b8a862105c0b0b8721e1c3638070413db61428e0120dfd34f30372195dffa not found: ID does not exist" containerID="ba7b8a862105c0b0b8721e1c3638070413db61428e0120dfd34f30372195dffa" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.344771 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7b8a862105c0b0b8721e1c3638070413db61428e0120dfd34f30372195dffa"} err="failed to get container status \"ba7b8a862105c0b0b8721e1c3638070413db61428e0120dfd34f30372195dffa\": rpc error: code = NotFound desc = could not find container \"ba7b8a862105c0b0b8721e1c3638070413db61428e0120dfd34f30372195dffa\": container with ID starting with ba7b8a862105c0b0b8721e1c3638070413db61428e0120dfd34f30372195dffa not found: ID does not exist" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.370168 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.535556 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.546296 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.555474 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:08 crc kubenswrapper[5065]: E1008 13:39:08.555899 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="ceilometer-notification-agent" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.559567 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="ceilometer-notification-agent" Oct 08 13:39:08 crc kubenswrapper[5065]: E1008 13:39:08.559600 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="ceilometer-central-agent" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.559607 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="ceilometer-central-agent" Oct 08 13:39:08 crc kubenswrapper[5065]: E1008 13:39:08.559629 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="proxy-httpd" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.559635 5065 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="proxy-httpd" Oct 08 13:39:08 crc kubenswrapper[5065]: E1008 13:39:08.559654 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="sg-core" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.559659 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="sg-core" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.559946 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="sg-core" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.559959 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="ceilometer-notification-agent" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.559973 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="proxy-httpd" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.559984 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" containerName="ceilometer-central-agent" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.561909 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.565382 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.565637 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.565788 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.574248 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.676755 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-run-httpd\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.676852 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jcv\" (UniqueName: \"kubernetes.io/projected/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-kube-api-access-45jcv\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.676901 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.676931 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-log-httpd\") pod \"ceilometer-0\" (UID: 
\"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.676983 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.677022 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-scripts\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.677061 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.677089 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-config-data\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.778386 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jcv\" (UniqueName: \"kubernetes.io/projected/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-kube-api-access-45jcv\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.778459 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.778479 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-log-httpd\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.778522 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.778554 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-scripts\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.778589 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.778610 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-config-data\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.778672 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-run-httpd\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.779062 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-log-httpd\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.779098 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-run-httpd\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.782468 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.784677 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.784716 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-scripts\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.785187 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.787040 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-config-data\") pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.797097 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jcv\" (UniqueName: \"kubernetes.io/projected/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-kube-api-access-45jcv\") 
pod \"ceilometer-0\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " pod="openstack/ceilometer-0" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.887205 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97358ac5-7f8d-4a1c-ae8a-9f981730c1b2" path="/var/lib/kubelet/pods/97358ac5-7f8d-4a1c-ae8a-9f981730c1b2/volumes" Oct 08 13:39:08 crc kubenswrapper[5065]: I1008 13:39:08.893069 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:39:09 crc kubenswrapper[5065]: I1008 13:39:09.353085 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:09 crc kubenswrapper[5065]: W1008 13:39:09.357279 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ac4d4d6_b3dc_44f0_a2f2_115c5c130314.slice/crio-8d9ebc0ab121bbbba8e1318dc1ec74d0eabd671d68f8126235b61d36a77e1522 WatchSource:0}: Error finding container 8d9ebc0ab121bbbba8e1318dc1ec74d0eabd671d68f8126235b61d36a77e1522: Status 404 returned error can't find the container with id 8d9ebc0ab121bbbba8e1318dc1ec74d0eabd671d68f8126235b61d36a77e1522 Oct 08 13:39:10 crc kubenswrapper[5065]: I1008 13:39:10.212433 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314","Type":"ContainerStarted","Data":"8d9ebc0ab121bbbba8e1318dc1ec74d0eabd671d68f8126235b61d36a77e1522"} Oct 08 13:39:11 crc kubenswrapper[5065]: I1008 13:39:11.223382 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314","Type":"ContainerStarted","Data":"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de"} Oct 08 13:39:11 crc kubenswrapper[5065]: I1008 13:39:11.224027 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314","Type":"ContainerStarted","Data":"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61"} Oct 08 13:39:12 crc kubenswrapper[5065]: I1008 13:39:12.234615 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314","Type":"ContainerStarted","Data":"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c"} Oct 08 13:39:12 crc kubenswrapper[5065]: I1008 13:39:12.529624 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.238677 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.245833 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.249001 5065 generic.go:334] "Generic (PLEG): container finished" podID="1d196d62-fd18-4261-9fc8-1f23060a4848" containerID="c3553a922499886d8defb258ef862b6b3474e901f2471c50a8abf7cfc95dece5" exitCode=137 Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.249075 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d196d62-fd18-4261-9fc8-1f23060a4848","Type":"ContainerDied","Data":"c3553a922499886d8defb258ef862b6b3474e901f2471c50a8abf7cfc95dece5"} Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.249106 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d196d62-fd18-4261-9fc8-1f23060a4848","Type":"ContainerDied","Data":"1bf1f102693b464fd1cea2e7ecb0fb0fa5cde4bbc608f9c017d9b23c69b27429"} Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.249127 5065 scope.go:117] "RemoveContainer" containerID="c3553a922499886d8defb258ef862b6b3474e901f2471c50a8abf7cfc95dece5" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.249274 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.251877 5065 generic.go:334] "Generic (PLEG): container finished" podID="bcbf9901-3fec-44c5-8560-6c208a0ca8a6" containerID="20cd2498cde3222989320920e532311b987c5fa645e47651aa86ae3eff5509c0" exitCode=137 Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.251915 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bcbf9901-3fec-44c5-8560-6c208a0ca8a6","Type":"ContainerDied","Data":"20cd2498cde3222989320920e532311b987c5fa645e47651aa86ae3eff5509c0"} Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.251942 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bcbf9901-3fec-44c5-8560-6c208a0ca8a6","Type":"ContainerDied","Data":"b991b7a5d705cd9561a4e6938e5e3bd73dddab744fb57205573bffa578a9ab84"} Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.252153 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.260543 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9s45\" (UniqueName: \"kubernetes.io/projected/1d196d62-fd18-4261-9fc8-1f23060a4848-kube-api-access-b9s45\") pod \"1d196d62-fd18-4261-9fc8-1f23060a4848\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.260591 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-combined-ca-bundle\") pod \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\" (UID: \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\") " Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.260628 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d196d62-fd18-4261-9fc8-1f23060a4848-combined-ca-bundle\") pod \"1d196d62-fd18-4261-9fc8-1f23060a4848\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.260771 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjn9w\" (UniqueName: \"kubernetes.io/projected/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-kube-api-access-kjn9w\") pod \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\" (UID: \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\") " Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.260800 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d196d62-fd18-4261-9fc8-1f23060a4848-logs\") pod \"1d196d62-fd18-4261-9fc8-1f23060a4848\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.260819 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-config-data\") pod \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\" (UID: \"bcbf9901-3fec-44c5-8560-6c208a0ca8a6\") " Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.260905 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d196d62-fd18-4261-9fc8-1f23060a4848-config-data\") pod \"1d196d62-fd18-4261-9fc8-1f23060a4848\" (UID: \"1d196d62-fd18-4261-9fc8-1f23060a4848\") " Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.265797 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d196d62-fd18-4261-9fc8-1f23060a4848-logs" (OuterVolumeSpecName: "logs") pod "1d196d62-fd18-4261-9fc8-1f23060a4848" (UID: "1d196d62-fd18-4261-9fc8-1f23060a4848"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.268355 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d196d62-fd18-4261-9fc8-1f23060a4848-kube-api-access-b9s45" (OuterVolumeSpecName: "kube-api-access-b9s45") pod "1d196d62-fd18-4261-9fc8-1f23060a4848" (UID: "1d196d62-fd18-4261-9fc8-1f23060a4848"). InnerVolumeSpecName "kube-api-access-b9s45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.270337 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-kube-api-access-kjn9w" (OuterVolumeSpecName: "kube-api-access-kjn9w") pod "bcbf9901-3fec-44c5-8560-6c208a0ca8a6" (UID: "bcbf9901-3fec-44c5-8560-6c208a0ca8a6"). InnerVolumeSpecName "kube-api-access-kjn9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.294562 5065 scope.go:117] "RemoveContainer" containerID="c9a33aa28f9f5d9b5d3619cd24d05b298e0894c94d8a9d3808ffa20919248308" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.298349 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-config-data" (OuterVolumeSpecName: "config-data") pod "bcbf9901-3fec-44c5-8560-6c208a0ca8a6" (UID: "bcbf9901-3fec-44c5-8560-6c208a0ca8a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.300329 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d196d62-fd18-4261-9fc8-1f23060a4848-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d196d62-fd18-4261-9fc8-1f23060a4848" (UID: "1d196d62-fd18-4261-9fc8-1f23060a4848"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.328476 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d196d62-fd18-4261-9fc8-1f23060a4848-config-data" (OuterVolumeSpecName: "config-data") pod "1d196d62-fd18-4261-9fc8-1f23060a4848" (UID: "1d196d62-fd18-4261-9fc8-1f23060a4848"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.335605 5065 scope.go:117] "RemoveContainer" containerID="c3553a922499886d8defb258ef862b6b3474e901f2471c50a8abf7cfc95dece5" Oct 08 13:39:13 crc kubenswrapper[5065]: E1008 13:39:13.337164 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3553a922499886d8defb258ef862b6b3474e901f2471c50a8abf7cfc95dece5\": container with ID starting with c3553a922499886d8defb258ef862b6b3474e901f2471c50a8abf7cfc95dece5 not found: ID does not exist" containerID="c3553a922499886d8defb258ef862b6b3474e901f2471c50a8abf7cfc95dece5" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.337207 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3553a922499886d8defb258ef862b6b3474e901f2471c50a8abf7cfc95dece5"} err="failed to get container status \"c3553a922499886d8defb258ef862b6b3474e901f2471c50a8abf7cfc95dece5\": rpc error: code = NotFound desc = could not find container \"c3553a922499886d8defb258ef862b6b3474e901f2471c50a8abf7cfc95dece5\": container with ID starting with c3553a922499886d8defb258ef862b6b3474e901f2471c50a8abf7cfc95dece5 not found: ID does not exist" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.337237 5065 scope.go:117] "RemoveContainer" containerID="c9a33aa28f9f5d9b5d3619cd24d05b298e0894c94d8a9d3808ffa20919248308" Oct 08 13:39:13 crc kubenswrapper[5065]: E1008 13:39:13.338492 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9a33aa28f9f5d9b5d3619cd24d05b298e0894c94d8a9d3808ffa20919248308\": container with ID starting with c9a33aa28f9f5d9b5d3619cd24d05b298e0894c94d8a9d3808ffa20919248308 not found: ID does not exist" containerID="c9a33aa28f9f5d9b5d3619cd24d05b298e0894c94d8a9d3808ffa20919248308" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.338528 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9a33aa28f9f5d9b5d3619cd24d05b298e0894c94d8a9d3808ffa20919248308"} err="failed to get container status \"c9a33aa28f9f5d9b5d3619cd24d05b298e0894c94d8a9d3808ffa20919248308\": rpc error: code = NotFound desc = could not find container \"c9a33aa28f9f5d9b5d3619cd24d05b298e0894c94d8a9d3808ffa20919248308\": container with ID starting with c9a33aa28f9f5d9b5d3619cd24d05b298e0894c94d8a9d3808ffa20919248308 not found: ID does not exist" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.338553 5065 scope.go:117] "RemoveContainer" containerID="20cd2498cde3222989320920e532311b987c5fa645e47651aa86ae3eff5509c0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.358388 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcbf9901-3fec-44c5-8560-6c208a0ca8a6" (UID: "bcbf9901-3fec-44c5-8560-6c208a0ca8a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.363035 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjn9w\" (UniqueName: \"kubernetes.io/projected/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-kube-api-access-kjn9w\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.363069 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d196d62-fd18-4261-9fc8-1f23060a4848-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.363082 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.363093 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d196d62-fd18-4261-9fc8-1f23060a4848-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.363101 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9s45\" (UniqueName: \"kubernetes.io/projected/1d196d62-fd18-4261-9fc8-1f23060a4848-kube-api-access-b9s45\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.363109 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbf9901-3fec-44c5-8560-6c208a0ca8a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.363117 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d196d62-fd18-4261-9fc8-1f23060a4848-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.363043 5065 scope.go:117] "RemoveContainer" containerID="20cd2498cde3222989320920e532311b987c5fa645e47651aa86ae3eff5509c0" Oct 08 13:39:13 crc kubenswrapper[5065]: E1008 13:39:13.363525 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20cd2498cde3222989320920e532311b987c5fa645e47651aa86ae3eff5509c0\": container with ID starting with 20cd2498cde3222989320920e532311b987c5fa645e47651aa86ae3eff5509c0 not found: ID does not exist" containerID="20cd2498cde3222989320920e532311b987c5fa645e47651aa86ae3eff5509c0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.363563 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20cd2498cde3222989320920e532311b987c5fa645e47651aa86ae3eff5509c0"} err="failed to get container status \"20cd2498cde3222989320920e532311b987c5fa645e47651aa86ae3eff5509c0\": rpc error: code = NotFound desc = could not find container \"20cd2498cde3222989320920e532311b987c5fa645e47651aa86ae3eff5509c0\": container with ID starting with 20cd2498cde3222989320920e532311b987c5fa645e47651aa86ae3eff5509c0 not found: ID does not exist" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.592195 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.605328 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.615840 5065 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.627111 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.639729 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:39:13 crc kubenswrapper[5065]: E1008 13:39:13.640665 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d196d62-fd18-4261-9fc8-1f23060a4848" containerName="nova-metadata-metadata" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.640832 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d196d62-fd18-4261-9fc8-1f23060a4848" containerName="nova-metadata-metadata" Oct 08 13:39:13 crc kubenswrapper[5065]: E1008 13:39:13.640981 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d196d62-fd18-4261-9fc8-1f23060a4848" containerName="nova-metadata-log" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.641092 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d196d62-fd18-4261-9fc8-1f23060a4848" containerName="nova-metadata-log" Oct 08 13:39:13 crc kubenswrapper[5065]: E1008 13:39:13.641232 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcbf9901-3fec-44c5-8560-6c208a0ca8a6" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.641365 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcbf9901-3fec-44c5-8560-6c208a0ca8a6" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.641856 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d196d62-fd18-4261-9fc8-1f23060a4848" containerName="nova-metadata-log" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.642022 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcbf9901-3fec-44c5-8560-6c208a0ca8a6" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.642157 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d196d62-fd18-4261-9fc8-1f23060a4848" containerName="nova-metadata-metadata" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.643725 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.649079 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.650505 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.650967 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.652031 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.654022 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.654091 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.654205 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.658766 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.666145 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.667577 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-config-data\") pod \"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.667626 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.667658 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.667685 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.667703 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f2d114f-59e8-480c-a2ca-87d07079b460-logs\") pod \"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.667730 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.667765 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-852zs\" (UniqueName: \"kubernetes.io/projected/1f2d114f-59e8-480c-a2ca-87d07079b460-kube-api-access-852zs\") pod \"nova-metadata-0\" 
(UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.667797 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-447m6\" (UniqueName: \"kubernetes.io/projected/6470de54-fdec-4648-b941-1031c67f55ca-kube-api-access-447m6\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.667830 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.667860 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.769378 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-config-data\") pod \"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.769538 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.769648 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.769689 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.770104 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f2d114f-59e8-480c-a2ca-87d07079b460-logs\") pod \"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.770166 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: 
I1008 13:39:13.770223 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-852zs\" (UniqueName: \"kubernetes.io/projected/1f2d114f-59e8-480c-a2ca-87d07079b460-kube-api-access-852zs\") pod \"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.770274 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-447m6\" (UniqueName: \"kubernetes.io/projected/6470de54-fdec-4648-b941-1031c67f55ca-kube-api-access-447m6\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.770327 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.770375 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.771259 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f2d114f-59e8-480c-a2ca-87d07079b460-logs\") pod \"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.773320 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-config-data\") pod \"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.774328 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.778139 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.778264 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.780106 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.783001 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.783967 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.791934 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-447m6\" (UniqueName: \"kubernetes.io/projected/6470de54-fdec-4648-b941-1031c67f55ca-kube-api-access-447m6\") pod \"nova-cell1-novncproxy-0\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.792763 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-852zs\" (UniqueName: \"kubernetes.io/projected/1f2d114f-59e8-480c-a2ca-87d07079b460-kube-api-access-852zs\") pod \"nova-metadata-0\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " pod="openstack/nova-metadata-0" Oct 08 13:39:13 crc kubenswrapper[5065]: I1008 13:39:13.977530 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 13:39:14 crc kubenswrapper[5065]: I1008 13:39:14.000594 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:14 crc kubenswrapper[5065]: I1008 13:39:14.265875 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314","Type":"ContainerStarted","Data":"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d"} Oct 08 13:39:14 crc kubenswrapper[5065]: I1008 13:39:14.266367 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 13:39:14 crc kubenswrapper[5065]: I1008 13:39:14.291592 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.621241668 podStartE2EDuration="6.291570543s" podCreationTimestamp="2025-10-08 13:39:08 +0000 UTC" firstStartedPulling="2025-10-08 13:39:09.36007771 +0000 UTC m=+1251.137459467" lastFinishedPulling="2025-10-08 13:39:13.030406575 +0000 UTC m=+1254.807788342" observedRunningTime="2025-10-08 13:39:14.287027535 +0000 UTC m=+1256.064409302" watchObservedRunningTime="2025-10-08 13:39:14.291570543 +0000 UTC m=+1256.068952310" Oct 08 13:39:14 crc kubenswrapper[5065]: I1008 13:39:14.451227 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 13:39:14 crc kubenswrapper[5065]: W1008 13:39:14.523260 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f2d114f_59e8_480c_a2ca_87d07079b460.slice/crio-2cd86d04123f241feeec881a222f45a35514dfdaccd0b46480f00d319976b930 WatchSource:0}: Error finding container 2cd86d04123f241feeec881a222f45a35514dfdaccd0b46480f00d319976b930: Status 404 returned error can't find the container with id 2cd86d04123f241feeec881a222f45a35514dfdaccd0b46480f00d319976b930 Oct 08 13:39:14 crc kubenswrapper[5065]: I1008 13:39:14.525563 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:39:14 crc kubenswrapper[5065]: I1008 13:39:14.884807 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d196d62-fd18-4261-9fc8-1f23060a4848" path="/var/lib/kubelet/pods/1d196d62-fd18-4261-9fc8-1f23060a4848/volumes" Oct 08 13:39:14 crc kubenswrapper[5065]: I1008 13:39:14.885813 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcbf9901-3fec-44c5-8560-6c208a0ca8a6" path="/var/lib/kubelet/pods/bcbf9901-3fec-44c5-8560-6c208a0ca8a6/volumes" Oct 08 13:39:15 crc kubenswrapper[5065]: I1008 13:39:15.280274 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6470de54-fdec-4648-b941-1031c67f55ca","Type":"ContainerStarted","Data":"51d59f6c1a32d7f9548db66179bc1cdd29b957b043fcdf8c829de549616ab078"} Oct 08 13:39:15 crc kubenswrapper[5065]: I1008 13:39:15.280317 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6470de54-fdec-4648-b941-1031c67f55ca","Type":"ContainerStarted","Data":"47af1d66508e4273edebc4fd31c76944939d16540d68c065850969255415e6df"} Oct 08 13:39:15 crc kubenswrapper[5065]: I1008 13:39:15.284864 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f2d114f-59e8-480c-a2ca-87d07079b460","Type":"ContainerStarted","Data":"a80e771598f49d727a2569af3f2fddcd88200f27264eacf73589069231740958"} Oct 08 13:39:15 crc kubenswrapper[5065]: I1008 13:39:15.284906 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"1f2d114f-59e8-480c-a2ca-87d07079b460","Type":"ContainerStarted","Data":"15622e750bb89716e3031a03196f3e5a6968e579900c7ea5194f35a5923d373d"} Oct 08 13:39:15 crc kubenswrapper[5065]: I1008 13:39:15.284923 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f2d114f-59e8-480c-a2ca-87d07079b460","Type":"ContainerStarted","Data":"2cd86d04123f241feeec881a222f45a35514dfdaccd0b46480f00d319976b930"} Oct 08 13:39:15 crc kubenswrapper[5065]: I1008 13:39:15.315054 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.315006065 podStartE2EDuration="2.315006065s" podCreationTimestamp="2025-10-08 13:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:39:15.303189862 +0000 UTC m=+1257.080571639" watchObservedRunningTime="2025-10-08 13:39:15.315006065 +0000 UTC m=+1257.092387822" Oct 08 13:39:15 crc kubenswrapper[5065]: I1008 13:39:15.335504 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.335482711 podStartE2EDuration="2.335482711s" podCreationTimestamp="2025-10-08 13:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:39:15.328730501 +0000 UTC m=+1257.106112298" watchObservedRunningTime="2025-10-08 13:39:15.335482711 +0000 UTC m=+1257.112864478" Oct 08 13:39:15 crc kubenswrapper[5065]: I1008 13:39:15.409793 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 13:39:15 crc kubenswrapper[5065]: I1008 13:39:15.410981 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 13:39:15 crc kubenswrapper[5065]: I1008 13:39:15.414032 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 13:39:15 crc kubenswrapper[5065]: I1008 13:39:15.415973 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.294973 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.298501 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.516690 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-76vsq"] Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.518189 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.531052 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-76vsq"] Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.629611 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-config\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.629672 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvsnv\" (UniqueName: \"kubernetes.io/projected/18710aa1-a99f-421b-9a4f-694362061773-kube-api-access-pvsnv\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.629696 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.629902 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-dns-svc\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.630118 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.630142 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.731911 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-dns-svc\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.732036 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.732057 5065 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.732123 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-config\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.732154 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvsnv\" (UniqueName: \"kubernetes.io/projected/18710aa1-a99f-421b-9a4f-694362061773-kube-api-access-pvsnv\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.732169 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.733056 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-dns-svc\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.733177 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.733279 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-config\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.733343 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.733837 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.753734 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvsnv\" (UniqueName: 
\"kubernetes.io/projected/18710aa1-a99f-421b-9a4f-694362061773-kube-api-access-pvsnv\") pod \"dnsmasq-dns-6d4d96bb9-76vsq\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") " pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:16 crc kubenswrapper[5065]: I1008 13:39:16.863780 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:17 crc kubenswrapper[5065]: I1008 13:39:17.373004 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-76vsq"] Oct 08 13:39:18 crc kubenswrapper[5065]: I1008 13:39:18.316884 5065 generic.go:334] "Generic (PLEG): container finished" podID="18710aa1-a99f-421b-9a4f-694362061773" containerID="2482d7697e88b5d83d42327c308805cfe4c53b7df827744880d0db1345958ec8" exitCode=0 Oct 08 13:39:18 crc kubenswrapper[5065]: I1008 13:39:18.318507 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" event={"ID":"18710aa1-a99f-421b-9a4f-694362061773","Type":"ContainerDied","Data":"2482d7697e88b5d83d42327c308805cfe4c53b7df827744880d0db1345958ec8"} Oct 08 13:39:18 crc kubenswrapper[5065]: I1008 13:39:18.318542 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" event={"ID":"18710aa1-a99f-421b-9a4f-694362061773","Type":"ContainerStarted","Data":"ee43953594f20a81e11b57b199f2d95d7500161cb71820143c358c1599be5885"} Oct 08 13:39:18 crc kubenswrapper[5065]: I1008 13:39:18.810099 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 13:39:18 crc kubenswrapper[5065]: I1008 13:39:18.977931 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 13:39:18 crc kubenswrapper[5065]: I1008 13:39:18.977988 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 13:39:19 crc kubenswrapper[5065]: I1008 13:39:19.000985 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:39:19 crc kubenswrapper[5065]: I1008 13:39:19.266037 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:19 crc kubenswrapper[5065]: I1008 13:39:19.266597 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="ceilometer-central-agent" containerID="cri-o://2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61" gracePeriod=30 Oct 08 13:39:19 crc kubenswrapper[5065]: I1008 13:39:19.266675 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="ceilometer-notification-agent" containerID="cri-o://b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de" gracePeriod=30 Oct 08 13:39:19 crc kubenswrapper[5065]: I1008 13:39:19.266690 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="sg-core" containerID="cri-o://6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c" gracePeriod=30 Oct 08 13:39:19 crc kubenswrapper[5065]: I1008 13:39:19.266655 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="proxy-httpd" 
containerID="cri-o://315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d" gracePeriod=30 Oct 08 13:39:19 crc kubenswrapper[5065]: I1008 13:39:19.327448 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" event={"ID":"18710aa1-a99f-421b-9a4f-694362061773","Type":"ContainerStarted","Data":"f184d017af97fa8ec5dc9b650dbdc3311a404638bdaa61798fc73a62a8d532a3"} Oct 08 13:39:19 crc kubenswrapper[5065]: I1008 13:39:19.327818 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="02f56ba4-d900-4098-90f2-ae5ccd32357f" containerName="nova-api-log" containerID="cri-o://4a262fcc062d144fa0034ced110a8b73a8db79dbb941b0bd7caebe0b46ca2cae" gracePeriod=30 Oct 08 13:39:19 crc kubenswrapper[5065]: I1008 13:39:19.327870 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="02f56ba4-d900-4098-90f2-ae5ccd32357f" containerName="nova-api-api" containerID="cri-o://873fae0652c62cbf7840c4b36b72575683e41bcb41b3efc76a35469e403d081e" gracePeriod=30 Oct 08 13:39:19 crc kubenswrapper[5065]: I1008 13:39:19.355578 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" podStartSLOduration=3.355561436 podStartE2EDuration="3.355561436s" podCreationTimestamp="2025-10-08 13:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:39:19.348530929 +0000 UTC m=+1261.125912716" watchObservedRunningTime="2025-10-08 13:39:19.355561436 +0000 UTC m=+1261.132943213" Oct 08 13:39:19 crc kubenswrapper[5065]: I1008 13:39:19.974104 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.136624 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-scripts\") pod \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.136976 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-sg-core-conf-yaml\") pod \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.137050 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-combined-ca-bundle\") pod \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.137078 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-log-httpd\") pod \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.137152 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-run-httpd\") pod \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.137199 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45jcv\" (UniqueName: \"kubernetes.io/projected/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-kube-api-access-45jcv\") pod \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.137229 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-ceilometer-tls-certs\") pod \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.137257 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-config-data\") pod \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\" (UID: \"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314\") " Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.137518 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" (UID: "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.137646 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" (UID: "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.137772 5065 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.143119 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-kube-api-access-45jcv" (OuterVolumeSpecName: "kube-api-access-45jcv") pod "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" (UID: "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314"). InnerVolumeSpecName "kube-api-access-45jcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.143393 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-scripts" (OuterVolumeSpecName: "scripts") pod "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" (UID: "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.178861 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" (UID: "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.199478 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" (UID: "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.238335 5065 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.238367 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45jcv\" (UniqueName: \"kubernetes.io/projected/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-kube-api-access-45jcv\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.238377 5065 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.238385 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.238394 5065 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.241031 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" (UID: "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.242753 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-config-data" (OuterVolumeSpecName: "config-data") pod "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" (UID: "0ac4d4d6-b3dc-44f0-a2f2-115c5c130314"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.339096 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerID="315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d" exitCode=0 Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.339129 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerID="6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c" exitCode=2 Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.339139 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314","Type":"ContainerDied","Data":"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d"} Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.339161 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerID="b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de" exitCode=0 Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.339172 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerID="2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61" exitCode=0 Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.339175 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314","Type":"ContainerDied","Data":"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c"} Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.339181 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.339201 5065 scope.go:117] "RemoveContainer" containerID="315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.339189 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314","Type":"ContainerDied","Data":"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de"} Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.339319 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314","Type":"ContainerDied","Data":"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61"} Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.339370 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ac4d4d6-b3dc-44f0-a2f2-115c5c130314","Type":"ContainerDied","Data":"8d9ebc0ab121bbbba8e1318dc1ec74d0eabd671d68f8126235b61d36a77e1522"} Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.339977 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.340001 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.342632 5065 generic.go:334] "Generic (PLEG): container finished" podID="02f56ba4-d900-4098-90f2-ae5ccd32357f" containerID="4a262fcc062d144fa0034ced110a8b73a8db79dbb941b0bd7caebe0b46ca2cae" exitCode=143 Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.342699 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02f56ba4-d900-4098-90f2-ae5ccd32357f","Type":"ContainerDied","Data":"4a262fcc062d144fa0034ced110a8b73a8db79dbb941b0bd7caebe0b46ca2cae"} Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.343038 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.365668 5065 scope.go:117] "RemoveContainer" containerID="6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.395556 5065 scope.go:117] "RemoveContainer" containerID="b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.401899 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.426607 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.434967 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:20 crc kubenswrapper[5065]: E1008 13:39:20.435540 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="sg-core" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.435656 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" 
containerName="sg-core" Oct 08 13:39:20 crc kubenswrapper[5065]: E1008 13:39:20.435759 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="proxy-httpd" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.435845 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="proxy-httpd" Oct 08 13:39:20 crc kubenswrapper[5065]: E1008 13:39:20.435939 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="ceilometer-central-agent" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.436015 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="ceilometer-central-agent" Oct 08 13:39:20 crc kubenswrapper[5065]: E1008 13:39:20.436101 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="ceilometer-notification-agent" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.436180 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="ceilometer-notification-agent" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.436530 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="sg-core" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.436620 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="proxy-httpd" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.436700 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="ceilometer-notification-agent" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.436795 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" containerName="ceilometer-central-agent" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.439781 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.443765 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.444082 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.444930 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.446930 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.505258 5065 scope.go:117] "RemoveContainer" containerID="2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.531453 5065 scope.go:117] "RemoveContainer" containerID="315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d" Oct 08 13:39:20 crc kubenswrapper[5065]: E1008 13:39:20.531976 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d\": container with ID starting with 315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d not found: ID does not exist" containerID="315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.532011 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d"} err="failed to get container status \"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d\": rpc error: code = NotFound desc = could not find container \"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d\": container with ID starting with 315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d not found: ID does not exist" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.532040 5065 scope.go:117] "RemoveContainer" containerID="6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c" Oct 08 13:39:20 crc kubenswrapper[5065]: E1008 13:39:20.532367 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c\": container with ID starting with 6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c not found: ID does not exist" containerID="6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.532389 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c"} err="failed to get container status \"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c\": rpc error: code = NotFound desc = could not find container \"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c\": container with ID starting with 6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c not found: ID does not exist" Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.532402 5065 scope.go:117] "RemoveContainer" containerID="b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de" Oct 08 13:39:20 
crc kubenswrapper[5065]: E1008 13:39:20.532624 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de\": container with ID starting with b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de not found: ID does not exist" containerID="b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.532660 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de"} err="failed to get container status \"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de\": rpc error: code = NotFound desc = could not find container \"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de\": container with ID starting with b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.532672 5065 scope.go:117] "RemoveContainer" containerID="2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61"
Oct 08 13:39:20 crc kubenswrapper[5065]: E1008 13:39:20.532929 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61\": container with ID starting with 2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61 not found: ID does not exist" containerID="2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.532953 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61"} err="failed to get container status \"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61\": rpc error: code = NotFound desc = could not find container \"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61\": container with ID starting with 2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61 not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.532965 5065 scope.go:117] "RemoveContainer" containerID="315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.533150 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d"} err="failed to get container status \"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d\": rpc error: code = NotFound desc = could not find container \"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d\": container with ID starting with 315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.533170 5065 scope.go:117] "RemoveContainer" containerID="6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.533366 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c"} err="failed to get container status \"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c\": rpc error: code = NotFound desc = could not find container \"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c\": container with ID starting with 6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.533390 5065 scope.go:117] "RemoveContainer" containerID="b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.533773 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de"} err="failed to get container status \"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de\": rpc error: code = NotFound desc = could not find container \"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de\": container with ID starting with b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.533791 5065 scope.go:117] "RemoveContainer" containerID="2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.534104 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61"} err="failed to get container status \"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61\": rpc error: code = NotFound desc = could not find container \"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61\": container with ID starting with 2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61 not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.534122 5065 scope.go:117] "RemoveContainer" containerID="315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.534401 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d"} err="failed to get container status \"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d\": rpc error: code = NotFound desc = could not find container \"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d\": container with ID starting with 315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.534472 5065 scope.go:117] "RemoveContainer" containerID="6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.534665 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c"} err="failed to get container status \"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c\": rpc error: code = NotFound desc = could not find container \"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c\": container with ID starting with 6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.534683 5065 scope.go:117] "RemoveContainer" containerID="b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.534873 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de"} err="failed to get container status \"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de\": rpc error: code = NotFound desc = could not find container \"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de\": container with ID starting with b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.534893 5065 scope.go:117] "RemoveContainer" containerID="2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.535084 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61"} err="failed to get container status \"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61\": rpc error: code = NotFound desc = could not find container \"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61\": container with ID starting with 2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61 not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.535101 5065 scope.go:117] "RemoveContainer" containerID="315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.535254 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d"} err="failed to get container status \"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d\": rpc error: code = NotFound desc = could not find container \"315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d\": container with ID starting with 315c6b8c1d8cf469253d45a142555147b48991b8d47e0c6752fa25548b483c6d not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.535271 5065 scope.go:117] "RemoveContainer" containerID="6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.535399 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c"} err="failed to get container status \"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c\": rpc error: code = NotFound desc = could not find container \"6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c\": container with ID starting with 6ae9876afcf2c403229072c5e85e07e62dae9f788a5cb7da780dbeafabab540c not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.535427 5065 scope.go:117] "RemoveContainer" containerID="b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.535565 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de"} err="failed to get container status \"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de\": rpc error: code = NotFound desc = could not find container \"b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de\": container with ID starting with b73aab86b65c16a577cef4515c8b42806b6c106cbf66ad5256be74137c87d1de not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.535592 5065 scope.go:117] "RemoveContainer" containerID="2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.535771 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61"} err="failed to get container status \"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61\": rpc error: code = NotFound desc = could not find container \"2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61\": container with ID starting with 2f730c3edde2a63b26746a87f4b7813fdeff920849ad0867a08e30540483bd61 not found: ID does not exist"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.542906 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.543011 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4829c2a2-c022-4d7a-86eb-38134480c008-log-httpd\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.543042 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.543064 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vshm6\" (UniqueName: \"kubernetes.io/projected/4829c2a2-c022-4d7a-86eb-38134480c008-kube-api-access-vshm6\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.543083 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4829c2a2-c022-4d7a-86eb-38134480c008-run-httpd\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.543119 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.543134 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-config-data\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.543197 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-scripts\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.644973 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-scripts\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.645270 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.645351 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4829c2a2-c022-4d7a-86eb-38134480c008-log-httpd\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.645387 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.645475 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vshm6\" (UniqueName: \"kubernetes.io/projected/4829c2a2-c022-4d7a-86eb-38134480c008-kube-api-access-vshm6\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.645528 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4829c2a2-c022-4d7a-86eb-38134480c008-run-httpd\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.645559 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.645577 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-config-data\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.646574 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4829c2a2-c022-4d7a-86eb-38134480c008-run-httpd\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.646584 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4829c2a2-c022-4d7a-86eb-38134480c008-log-httpd\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.649481 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.649928 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.652273 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-scripts\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.653212 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.654020 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-config-data\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.665489 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vshm6\" (UniqueName: \"kubernetes.io/projected/4829c2a2-c022-4d7a-86eb-38134480c008-kube-api-access-vshm6\") pod \"ceilometer-0\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") " pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.795052 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 13:39:20 crc kubenswrapper[5065]: I1008 13:39:20.888770 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ac4d4d6-b3dc-44f0-a2f2-115c5c130314" path="/var/lib/kubelet/pods/0ac4d4d6-b3dc-44f0-a2f2-115c5c130314/volumes"
Oct 08 13:39:21 crc kubenswrapper[5065]: I1008 13:39:21.035228 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 13:39:21 crc kubenswrapper[5065]: I1008 13:39:21.223730 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 13:39:21 crc kubenswrapper[5065]: I1008 13:39:21.356220 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4829c2a2-c022-4d7a-86eb-38134480c008","Type":"ContainerStarted","Data":"894b26825423a12da1b3dbf68f31f175a0d214100190737bb33087b07894b93d"}
Oct 08 13:39:22 crc kubenswrapper[5065]: I1008 13:39:22.370338 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4829c2a2-c022-4d7a-86eb-38134480c008","Type":"ContainerStarted","Data":"1d88533cb280739e118ebd1857bb4ad8e70a7358e9ff7a3b7599ec8929177246"}
Oct 08 13:39:22 crc kubenswrapper[5065]: I1008 13:39:22.790071 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 13:39:22 crc kubenswrapper[5065]: I1008 13:39:22.895408 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f56ba4-d900-4098-90f2-ae5ccd32357f-config-data\") pod \"02f56ba4-d900-4098-90f2-ae5ccd32357f\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") "
Oct 08 13:39:22 crc kubenswrapper[5065]: I1008 13:39:22.895986 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f56ba4-d900-4098-90f2-ae5ccd32357f-logs\") pod \"02f56ba4-d900-4098-90f2-ae5ccd32357f\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") "
Oct 08 13:39:22 crc kubenswrapper[5065]: I1008 13:39:22.896222 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t467\" (UniqueName: \"kubernetes.io/projected/02f56ba4-d900-4098-90f2-ae5ccd32357f-kube-api-access-8t467\") pod \"02f56ba4-d900-4098-90f2-ae5ccd32357f\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") "
Oct 08 13:39:22 crc kubenswrapper[5065]: I1008 13:39:22.896451 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f56ba4-d900-4098-90f2-ae5ccd32357f-logs" (OuterVolumeSpecName: "logs") pod "02f56ba4-d900-4098-90f2-ae5ccd32357f" (UID: "02f56ba4-d900-4098-90f2-ae5ccd32357f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:39:22 crc kubenswrapper[5065]: I1008 13:39:22.896604 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f56ba4-d900-4098-90f2-ae5ccd32357f-combined-ca-bundle\") pod \"02f56ba4-d900-4098-90f2-ae5ccd32357f\" (UID: \"02f56ba4-d900-4098-90f2-ae5ccd32357f\") "
Oct 08 13:39:22 crc kubenswrapper[5065]: I1008 13:39:22.901369 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f56ba4-d900-4098-90f2-ae5ccd32357f-logs\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:22 crc kubenswrapper[5065]: I1008 13:39:22.917619 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f56ba4-d900-4098-90f2-ae5ccd32357f-kube-api-access-8t467" (OuterVolumeSpecName: "kube-api-access-8t467") pod "02f56ba4-d900-4098-90f2-ae5ccd32357f" (UID: "02f56ba4-d900-4098-90f2-ae5ccd32357f"). InnerVolumeSpecName "kube-api-access-8t467". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:39:22 crc kubenswrapper[5065]: I1008 13:39:22.924619 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f56ba4-d900-4098-90f2-ae5ccd32357f-config-data" (OuterVolumeSpecName: "config-data") pod "02f56ba4-d900-4098-90f2-ae5ccd32357f" (UID: "02f56ba4-d900-4098-90f2-ae5ccd32357f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:39:22 crc kubenswrapper[5065]: I1008 13:39:22.935323 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f56ba4-d900-4098-90f2-ae5ccd32357f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02f56ba4-d900-4098-90f2-ae5ccd32357f" (UID: "02f56ba4-d900-4098-90f2-ae5ccd32357f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.003151 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f56ba4-d900-4098-90f2-ae5ccd32357f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.003189 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f56ba4-d900-4098-90f2-ae5ccd32357f-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.003204 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t467\" (UniqueName: \"kubernetes.io/projected/02f56ba4-d900-4098-90f2-ae5ccd32357f-kube-api-access-8t467\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.379894 5065 generic.go:334] "Generic (PLEG): container finished" podID="02f56ba4-d900-4098-90f2-ae5ccd32357f" containerID="873fae0652c62cbf7840c4b36b72575683e41bcb41b3efc76a35469e403d081e" exitCode=0
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.379955 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02f56ba4-d900-4098-90f2-ae5ccd32357f","Type":"ContainerDied","Data":"873fae0652c62cbf7840c4b36b72575683e41bcb41b3efc76a35469e403d081e"}
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.380007 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.380469 5065 scope.go:117] "RemoveContainer" containerID="873fae0652c62cbf7840c4b36b72575683e41bcb41b3efc76a35469e403d081e"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.380405 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02f56ba4-d900-4098-90f2-ae5ccd32357f","Type":"ContainerDied","Data":"6a0bc4551fe65e364a35270079b1832789905bcb2b2d39f1422c5c8d0f45296f"}
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.383928 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4829c2a2-c022-4d7a-86eb-38134480c008","Type":"ContainerStarted","Data":"1f7b6398d2e594d0ce087ed7b05291d2e71b25f53a46c0781852edb462a5d3a4"}
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.383978 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4829c2a2-c022-4d7a-86eb-38134480c008","Type":"ContainerStarted","Data":"880cd36cc640177cdd2ee9303a43adadd04d945782dc197321b00a0883cb6aeb"}
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.409679 5065 scope.go:117] "RemoveContainer" containerID="4a262fcc062d144fa0034ced110a8b73a8db79dbb941b0bd7caebe0b46ca2cae"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.413960 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.429223 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.435175 5065 scope.go:117] "RemoveContainer" containerID="873fae0652c62cbf7840c4b36b72575683e41bcb41b3efc76a35469e403d081e"
Oct 08 13:39:23 crc kubenswrapper[5065]: E1008 13:39:23.435720 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"873fae0652c62cbf7840c4b36b72575683e41bcb41b3efc76a35469e403d081e\": container with ID starting with 873fae0652c62cbf7840c4b36b72575683e41bcb41b3efc76a35469e403d081e not found: ID does not exist" containerID="873fae0652c62cbf7840c4b36b72575683e41bcb41b3efc76a35469e403d081e"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.435775 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873fae0652c62cbf7840c4b36b72575683e41bcb41b3efc76a35469e403d081e"} err="failed to get container status \"873fae0652c62cbf7840c4b36b72575683e41bcb41b3efc76a35469e403d081e\": rpc error: code = NotFound desc = could not find container \"873fae0652c62cbf7840c4b36b72575683e41bcb41b3efc76a35469e403d081e\": container with ID starting with 873fae0652c62cbf7840c4b36b72575683e41bcb41b3efc76a35469e403d081e not found: ID does not exist"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.435802 5065 scope.go:117] "RemoveContainer" containerID="4a262fcc062d144fa0034ced110a8b73a8db79dbb941b0bd7caebe0b46ca2cae"
Oct 08 13:39:23 crc kubenswrapper[5065]: E1008 13:39:23.436227 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a262fcc062d144fa0034ced110a8b73a8db79dbb941b0bd7caebe0b46ca2cae\": container with ID starting with 4a262fcc062d144fa0034ced110a8b73a8db79dbb941b0bd7caebe0b46ca2cae not found: ID does not exist" containerID="4a262fcc062d144fa0034ced110a8b73a8db79dbb941b0bd7caebe0b46ca2cae"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.436265 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a262fcc062d144fa0034ced110a8b73a8db79dbb941b0bd7caebe0b46ca2cae"} err="failed to get container status \"4a262fcc062d144fa0034ced110a8b73a8db79dbb941b0bd7caebe0b46ca2cae\": rpc error: code = NotFound desc = could not find container \"4a262fcc062d144fa0034ced110a8b73a8db79dbb941b0bd7caebe0b46ca2cae\": container with ID starting with 4a262fcc062d144fa0034ced110a8b73a8db79dbb941b0bd7caebe0b46ca2cae not found: ID does not exist"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.444777 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 08 13:39:23 crc kubenswrapper[5065]: E1008 13:39:23.445344 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f56ba4-d900-4098-90f2-ae5ccd32357f" containerName="nova-api-log"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.445367 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f56ba4-d900-4098-90f2-ae5ccd32357f" containerName="nova-api-log"
Oct 08 13:39:23 crc kubenswrapper[5065]: E1008 13:39:23.445385 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f56ba4-d900-4098-90f2-ae5ccd32357f" containerName="nova-api-api"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.445394 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f56ba4-d900-4098-90f2-ae5ccd32357f" containerName="nova-api-api"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.445650 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f56ba4-d900-4098-90f2-ae5ccd32357f" containerName="nova-api-log"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.445682 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f56ba4-d900-4098-90f2-ae5ccd32357f" containerName="nova-api-api"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.446904 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.449571 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.449951 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.453372 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.456913 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.515956 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-public-tls-certs\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.516350 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a5449e-5fed-41e2-9242-0a815e34eb73-logs\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.516436 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.516465 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.516589 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jz6w\" (UniqueName: \"kubernetes.io/projected/09a5449e-5fed-41e2-9242-0a815e34eb73-kube-api-access-9jz6w\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.516627 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-config-data\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.627218 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-config-data\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.628155 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-public-tls-certs\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.628257 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a5449e-5fed-41e2-9242-0a815e34eb73-logs\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.628348 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.628384 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.628517 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jz6w\" (UniqueName: \"kubernetes.io/projected/09a5449e-5fed-41e2-9242-0a815e34eb73-kube-api-access-9jz6w\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.628800 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a5449e-5fed-41e2-9242-0a815e34eb73-logs\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.633233 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.633892 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-public-tls-certs\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.639567 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.640764 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-config-data\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.656651 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jz6w\" (UniqueName: \"kubernetes.io/projected/09a5449e-5fed-41e2-9242-0a815e34eb73-kube-api-access-9jz6w\") pod \"nova-api-0\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.793898 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.977718 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Oct 08 13:39:23 crc kubenswrapper[5065]: I1008 13:39:23.978070 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.001535 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.018836 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.255567 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 08 13:39:24 crc kubenswrapper[5065]: W1008 13:39:24.256249 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09a5449e_5fed_41e2_9242_0a815e34eb73.slice/crio-d2c6a906f3f453e18bef8087757e04403f19e53bd0ee85641a157e40b715f2ea WatchSource:0}: Error finding container d2c6a906f3f453e18bef8087757e04403f19e53bd0ee85641a157e40b715f2ea: Status 404 returned error can't find the container with id d2c6a906f3f453e18bef8087757e04403f19e53bd0ee85641a157e40b715f2ea
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.375222 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.375286 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.393272 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09a5449e-5fed-41e2-9242-0a815e34eb73","Type":"ContainerStarted","Data":"d2c6a906f3f453e18bef8087757e04403f19e53bd0ee85641a157e40b715f2ea"}
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.419237 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.571070 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qn4n5"]
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.579376 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.582955 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.583161 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.587116 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qn4n5"]
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.661969 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-config-data\") pod \"nova-cell1-cell-mapping-qn4n5\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") " pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.662075 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-scripts\") pod \"nova-cell1-cell-mapping-qn4n5\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") " pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.662100 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fcwb\" (UniqueName: \"kubernetes.io/projected/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-kube-api-access-9fcwb\") pod \"nova-cell1-cell-mapping-qn4n5\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") " pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.662211 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qn4n5\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") " pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.764190 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qn4n5\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") " pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.764326 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-config-data\") pod \"nova-cell1-cell-mapping-qn4n5\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") " pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.764393 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-scripts\") pod \"nova-cell1-cell-mapping-qn4n5\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") " pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.764427 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fcwb\" (UniqueName: \"kubernetes.io/projected/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-kube-api-access-9fcwb\") pod \"nova-cell1-cell-mapping-qn4n5\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") " pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.768358 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-config-data\") pod \"nova-cell1-cell-mapping-qn4n5\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") " pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.771300 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qn4n5\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") " pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.778942 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-scripts\") pod \"nova-cell1-cell-mapping-qn4n5\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") " pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.783251 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fcwb\" (UniqueName: \"kubernetes.io/projected/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-kube-api-access-9fcwb\") pod \"nova-cell1-cell-mapping-qn4n5\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") " pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.887066 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f56ba4-d900-4098-90f2-ae5ccd32357f" path="/var/lib/kubelet/pods/02f56ba4-d900-4098-90f2-ae5ccd32357f/volumes"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.973407 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.998663 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1f2d114f-59e8-480c-a2ca-87d07079b460" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 08 13:39:24 crc kubenswrapper[5065]: I1008 13:39:24.998663 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1f2d114f-59e8-480c-a2ca-87d07079b460" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 08 13:39:25 crc kubenswrapper[5065]: I1008 13:39:25.407605 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09a5449e-5fed-41e2-9242-0a815e34eb73","Type":"ContainerStarted","Data":"6137ea6f1340ce791d5379bb5dbaff8b9d90119ca34fd5227f1f557cc453f697"}
Oct 08 13:39:25 crc kubenswrapper[5065]: I1008 13:39:25.407950 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09a5449e-5fed-41e2-9242-0a815e34eb73","Type":"ContainerStarted","Data":"9e836df2e572d16ba92eaaa2a62d68e4736cf3eb3b8d984df8166d8c0a435654"}
Oct 08 13:39:25 crc kubenswrapper[5065]: I1008 13:39:25.414663 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="ceilometer-central-agent" containerID="cri-o://1d88533cb280739e118ebd1857bb4ad8e70a7358e9ff7a3b7599ec8929177246" gracePeriod=30
Oct 08 13:39:25 crc kubenswrapper[5065]: I1008 13:39:25.414979 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4829c2a2-c022-4d7a-86eb-38134480c008","Type":"ContainerStarted","Data":"b909eab5011a0d0fe1b732a1e24311d04093b8b92069a77757ddb4c0f96a1b3c"}
Oct 08 13:39:25 crc kubenswrapper[5065]: I1008 13:39:25.415037 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 08 13:39:25 crc kubenswrapper[5065]: I1008 13:39:25.415087 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="proxy-httpd" containerID="cri-o://b909eab5011a0d0fe1b732a1e24311d04093b8b92069a77757ddb4c0f96a1b3c" gracePeriod=30
Oct 08 13:39:25 crc kubenswrapper[5065]: I1008 13:39:25.415150 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="sg-core" containerID="cri-o://1f7b6398d2e594d0ce087ed7b05291d2e71b25f53a46c0781852edb462a5d3a4" gracePeriod=30
Oct 08 13:39:25 crc kubenswrapper[5065]: I1008 13:39:25.415199 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="ceilometer-notification-agent" containerID="cri-o://880cd36cc640177cdd2ee9303a43adadd04d945782dc197321b00a0883cb6aeb" gracePeriod=30
Oct 08 13:39:25 crc kubenswrapper[5065]: I1008 13:39:25.430649 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.4306283349999998 podStartE2EDuration="2.430628335s" podCreationTimestamp="2025-10-08 13:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:39:25.429312538 +0000 UTC m=+1267.206694305" watchObservedRunningTime="2025-10-08 13:39:25.430628335 +0000 UTC m=+1267.208010092"
Oct 08 13:39:25 crc kubenswrapper[5065]: I1008 13:39:25.474290 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.24143668 podStartE2EDuration="5.474266314s" podCreationTimestamp="2025-10-08 13:39:20 +0000 UTC" firstStartedPulling="2025-10-08 13:39:21.228639557 +0000 UTC m=+1263.006021314" lastFinishedPulling="2025-10-08 13:39:24.461469191 +0000 UTC m=+1266.238850948" observedRunningTime="2025-10-08 13:39:25.465078856 +0000 UTC m=+1267.242460623" watchObservedRunningTime="2025-10-08 13:39:25.474266314 +0000 UTC m=+1267.251648071"
Oct 08 13:39:25 crc kubenswrapper[5065]: W1008 13:39:25.492852 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0b3a4e4_af9e_4625_af67_2631d39d0a0b.slice/crio-0523134df25f278a7203920e5f529e6539530b5014c98e8da02b9eb9526d6dd8 WatchSource:0}: Error finding container 0523134df25f278a7203920e5f529e6539530b5014c98e8da02b9eb9526d6dd8: Status 404 returned error can't find the container with id 0523134df25f278a7203920e5f529e6539530b5014c98e8da02b9eb9526d6dd8
Oct 08 13:39:25 crc kubenswrapper[5065]: I1008 13:39:25.496893 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qn4n5"]
Oct 08 13:39:26 crc kubenswrapper[5065]: I1008 13:39:26.424129 5065 generic.go:334] "Generic (PLEG): container finished" podID="4829c2a2-c022-4d7a-86eb-38134480c008" containerID="b909eab5011a0d0fe1b732a1e24311d04093b8b92069a77757ddb4c0f96a1b3c" exitCode=0
Oct 08 13:39:26 crc kubenswrapper[5065]: I1008 13:39:26.424445 5065 generic.go:334] "Generic (PLEG): container finished" podID="4829c2a2-c022-4d7a-86eb-38134480c008" containerID="1f7b6398d2e594d0ce087ed7b05291d2e71b25f53a46c0781852edb462a5d3a4" exitCode=2
Oct 08 13:39:26 crc kubenswrapper[5065]: I1008 13:39:26.424453 5065 generic.go:334] "Generic (PLEG): container finished" podID="4829c2a2-c022-4d7a-86eb-38134480c008" containerID="880cd36cc640177cdd2ee9303a43adadd04d945782dc197321b00a0883cb6aeb" exitCode=0
Oct 08 13:39:26 crc kubenswrapper[5065]: I1008 13:39:26.424487 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4829c2a2-c022-4d7a-86eb-38134480c008","Type":"ContainerDied","Data":"b909eab5011a0d0fe1b732a1e24311d04093b8b92069a77757ddb4c0f96a1b3c"}
Oct 08 13:39:26 crc kubenswrapper[5065]: I1008 13:39:26.424511 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4829c2a2-c022-4d7a-86eb-38134480c008","Type":"ContainerDied","Data":"1f7b6398d2e594d0ce087ed7b05291d2e71b25f53a46c0781852edb462a5d3a4"}
Oct 08 13:39:26 crc kubenswrapper[5065]: I1008 13:39:26.424520 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4829c2a2-c022-4d7a-86eb-38134480c008","Type":"ContainerDied","Data":"880cd36cc640177cdd2ee9303a43adadd04d945782dc197321b00a0883cb6aeb"}
Oct 08 13:39:26 crc kubenswrapper[5065]: I1008 13:39:26.427278 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qn4n5" event={"ID":"e0b3a4e4-af9e-4625-af67-2631d39d0a0b","Type":"ContainerStarted","Data":"5f69ab5014e9ea4bb50f1b65d4593a168bb3d6a2c7e305250282dedb882013c5"}
Oct 08 13:39:26 crc kubenswrapper[5065]: I1008 13:39:26.427313 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qn4n5" event={"ID":"e0b3a4e4-af9e-4625-af67-2631d39d0a0b","Type":"ContainerStarted","Data":"0523134df25f278a7203920e5f529e6539530b5014c98e8da02b9eb9526d6dd8"}
Oct 08 13:39:26 crc kubenswrapper[5065]: I1008 13:39:26.865641 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq"
Oct 08 13:39:26 crc kubenswrapper[5065]: I1008 13:39:26.900001 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qn4n5" podStartSLOduration=2.8999829950000002 podStartE2EDuration="2.899982995s" podCreationTimestamp="2025-10-08 13:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:39:26.447863462 +0000 UTC m=+1268.225245219" watchObservedRunningTime="2025-10-08 13:39:26.899982995 +0000 UTC m=+1268.677364752"
Oct 08 13:39:26 crc kubenswrapper[5065]: I1008 13:39:26.974740 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-nzm7z"]
Oct 08 13:39:26 crc kubenswrapper[5065]: I1008 13:39:26.975076 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" podUID="ae6f2d1c-608d-4ad2-9056-80bb29eac06c" containerName="dnsmasq-dns" containerID="cri-o://123d0d3503c977fb8e29547b00bade3ab25c9379036637b70e294431d5b56bf0" gracePeriod=10
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.438938 5065 generic.go:334] "Generic (PLEG): container finished" podID="ae6f2d1c-608d-4ad2-9056-80bb29eac06c" containerID="123d0d3503c977fb8e29547b00bade3ab25c9379036637b70e294431d5b56bf0" exitCode=0
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.439260 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" event={"ID":"ae6f2d1c-608d-4ad2-9056-80bb29eac06c","Type":"ContainerDied","Data":"123d0d3503c977fb8e29547b00bade3ab25c9379036637b70e294431d5b56bf0"}
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.441936 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.444250 5065 generic.go:334] "Generic (PLEG): container finished" podID="4829c2a2-c022-4d7a-86eb-38134480c008" containerID="1d88533cb280739e118ebd1857bb4ad8e70a7358e9ff7a3b7599ec8929177246" exitCode=0
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.444507 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4829c2a2-c022-4d7a-86eb-38134480c008","Type":"ContainerDied","Data":"1d88533cb280739e118ebd1857bb4ad8e70a7358e9ff7a3b7599ec8929177246"}
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.444544 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4829c2a2-c022-4d7a-86eb-38134480c008","Type":"ContainerDied","Data":"894b26825423a12da1b3dbf68f31f175a0d214100190737bb33087b07894b93d"}
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.444561 5065 scope.go:117] "RemoveContainer" containerID="b909eab5011a0d0fe1b732a1e24311d04093b8b92069a77757ddb4c0f96a1b3c"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.516219 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-sg-core-conf-yaml\") pod \"4829c2a2-c022-4d7a-86eb-38134480c008\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.516562 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4829c2a2-c022-4d7a-86eb-38134480c008-run-httpd\") pod \"4829c2a2-c022-4d7a-86eb-38134480c008\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.516658 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-scripts\") pod \"4829c2a2-c022-4d7a-86eb-38134480c008\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.516738 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4829c2a2-c022-4d7a-86eb-38134480c008-log-httpd\") pod \"4829c2a2-c022-4d7a-86eb-38134480c008\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.517010 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vshm6\" (UniqueName: \"kubernetes.io/projected/4829c2a2-c022-4d7a-86eb-38134480c008-kube-api-access-vshm6\") pod \"4829c2a2-c022-4d7a-86eb-38134480c008\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.517091 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-ceilometer-tls-certs\") pod \"4829c2a2-c022-4d7a-86eb-38134480c008\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.517160 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-config-data\") pod \"4829c2a2-c022-4d7a-86eb-38134480c008\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.517253 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-combined-ca-bundle\") pod \"4829c2a2-c022-4d7a-86eb-38134480c008\" (UID: \"4829c2a2-c022-4d7a-86eb-38134480c008\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.520743 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4829c2a2-c022-4d7a-86eb-38134480c008-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4829c2a2-c022-4d7a-86eb-38134480c008" (UID: "4829c2a2-c022-4d7a-86eb-38134480c008"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.521066 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4829c2a2-c022-4d7a-86eb-38134480c008-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4829c2a2-c022-4d7a-86eb-38134480c008" (UID: "4829c2a2-c022-4d7a-86eb-38134480c008"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.524361 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-scripts" (OuterVolumeSpecName: "scripts") pod "4829c2a2-c022-4d7a-86eb-38134480c008" (UID: "4829c2a2-c022-4d7a-86eb-38134480c008"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.540530 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.541231 5065 scope.go:117] "RemoveContainer" containerID="1f7b6398d2e594d0ce087ed7b05291d2e71b25f53a46c0781852edb462a5d3a4"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.545292 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4829c2a2-c022-4d7a-86eb-38134480c008-kube-api-access-vshm6" (OuterVolumeSpecName: "kube-api-access-vshm6") pod "4829c2a2-c022-4d7a-86eb-38134480c008" (UID: "4829c2a2-c022-4d7a-86eb-38134480c008"). InnerVolumeSpecName "kube-api-access-vshm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.572848 5065 scope.go:117] "RemoveContainer" containerID="880cd36cc640177cdd2ee9303a43adadd04d945782dc197321b00a0883cb6aeb"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.587930 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4829c2a2-c022-4d7a-86eb-38134480c008" (UID: "4829c2a2-c022-4d7a-86eb-38134480c008"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.599805 5065 scope.go:117] "RemoveContainer" containerID="1d88533cb280739e118ebd1857bb4ad8e70a7358e9ff7a3b7599ec8929177246"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.613198 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4829c2a2-c022-4d7a-86eb-38134480c008" (UID: "4829c2a2-c022-4d7a-86eb-38134480c008"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.618375 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-ovsdbserver-nb\") pod \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.620721 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-ovsdbserver-sb\") pod \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.620805 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-config\") pod \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.620884 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-dns-swift-storage-0\") pod \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.620952 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-dns-svc\") pod \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.620971 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn5tv\" (UniqueName: \"kubernetes.io/projected/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-kube-api-access-vn5tv\") pod \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\" (UID: \"ae6f2d1c-608d-4ad2-9056-80bb29eac06c\") "
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.621853 5065 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.621868 5065 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4829c2a2-c022-4d7a-86eb-38134480c008-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.621877 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.621886 5065 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4829c2a2-c022-4d7a-86eb-38134480c008-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.621894 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vshm6\" (UniqueName: \"kubernetes.io/projected/4829c2a2-c022-4d7a-86eb-38134480c008-kube-api-access-vshm6\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.621903 5065 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.629189 5065 scope.go:117] "RemoveContainer" containerID="b909eab5011a0d0fe1b732a1e24311d04093b8b92069a77757ddb4c0f96a1b3c"
Oct 08 13:39:27 crc kubenswrapper[5065]: E1008 13:39:27.654156 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b909eab5011a0d0fe1b732a1e24311d04093b8b92069a77757ddb4c0f96a1b3c\": container with ID starting with b909eab5011a0d0fe1b732a1e24311d04093b8b92069a77757ddb4c0f96a1b3c not found: ID does not exist" containerID="b909eab5011a0d0fe1b732a1e24311d04093b8b92069a77757ddb4c0f96a1b3c"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.654231 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b909eab5011a0d0fe1b732a1e24311d04093b8b92069a77757ddb4c0f96a1b3c"} err="failed to get container status \"b909eab5011a0d0fe1b732a1e24311d04093b8b92069a77757ddb4c0f96a1b3c\": rpc error: code = NotFound desc = could not find container \"b909eab5011a0d0fe1b732a1e24311d04093b8b92069a77757ddb4c0f96a1b3c\": container with ID starting with b909eab5011a0d0fe1b732a1e24311d04093b8b92069a77757ddb4c0f96a1b3c not found: ID does not exist"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.654264 5065 scope.go:117] "RemoveContainer" containerID="1f7b6398d2e594d0ce087ed7b05291d2e71b25f53a46c0781852edb462a5d3a4"
Oct 08 13:39:27 crc kubenswrapper[5065]: E1008 13:39:27.666796 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f7b6398d2e594d0ce087ed7b05291d2e71b25f53a46c0781852edb462a5d3a4\": container with ID starting with 1f7b6398d2e594d0ce087ed7b05291d2e71b25f53a46c0781852edb462a5d3a4 not found: ID does not exist" containerID="1f7b6398d2e594d0ce087ed7b05291d2e71b25f53a46c0781852edb462a5d3a4"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.666859 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f7b6398d2e594d0ce087ed7b05291d2e71b25f53a46c0781852edb462a5d3a4"} err="failed to get container status \"1f7b6398d2e594d0ce087ed7b05291d2e71b25f53a46c0781852edb462a5d3a4\": rpc error: code = NotFound desc = could not find container \"1f7b6398d2e594d0ce087ed7b05291d2e71b25f53a46c0781852edb462a5d3a4\": container with ID starting with 1f7b6398d2e594d0ce087ed7b05291d2e71b25f53a46c0781852edb462a5d3a4 not found: ID does not exist"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.666890 5065 scope.go:117] "RemoveContainer" containerID="880cd36cc640177cdd2ee9303a43adadd04d945782dc197321b00a0883cb6aeb"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.667185 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-kube-api-access-vn5tv" (OuterVolumeSpecName: "kube-api-access-vn5tv") pod "ae6f2d1c-608d-4ad2-9056-80bb29eac06c" (UID: "ae6f2d1c-608d-4ad2-9056-80bb29eac06c"). InnerVolumeSpecName "kube-api-access-vn5tv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:39:27 crc kubenswrapper[5065]: E1008 13:39:27.694575 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"880cd36cc640177cdd2ee9303a43adadd04d945782dc197321b00a0883cb6aeb\": container with ID starting with 880cd36cc640177cdd2ee9303a43adadd04d945782dc197321b00a0883cb6aeb not found: ID does not exist" containerID="880cd36cc640177cdd2ee9303a43adadd04d945782dc197321b00a0883cb6aeb"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.694630 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"880cd36cc640177cdd2ee9303a43adadd04d945782dc197321b00a0883cb6aeb"} err="failed to get container status \"880cd36cc640177cdd2ee9303a43adadd04d945782dc197321b00a0883cb6aeb\": rpc error: code = NotFound desc = could not find container \"880cd36cc640177cdd2ee9303a43adadd04d945782dc197321b00a0883cb6aeb\": container with ID starting with 880cd36cc640177cdd2ee9303a43adadd04d945782dc197321b00a0883cb6aeb not found: ID does not exist"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.694666 5065 scope.go:117] "RemoveContainer" containerID="1d88533cb280739e118ebd1857bb4ad8e70a7358e9ff7a3b7599ec8929177246"
Oct 08 13:39:27 crc kubenswrapper[5065]: E1008 13:39:27.696922 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d88533cb280739e118ebd1857bb4ad8e70a7358e9ff7a3b7599ec8929177246\": container with ID starting with 1d88533cb280739e118ebd1857bb4ad8e70a7358e9ff7a3b7599ec8929177246 not found: ID does not exist" containerID="1d88533cb280739e118ebd1857bb4ad8e70a7358e9ff7a3b7599ec8929177246"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.696958 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d88533cb280739e118ebd1857bb4ad8e70a7358e9ff7a3b7599ec8929177246"} err="failed to get container status \"1d88533cb280739e118ebd1857bb4ad8e70a7358e9ff7a3b7599ec8929177246\": rpc error: code = NotFound desc = could not find container \"1d88533cb280739e118ebd1857bb4ad8e70a7358e9ff7a3b7599ec8929177246\": container with ID starting with 1d88533cb280739e118ebd1857bb4ad8e70a7358e9ff7a3b7599ec8929177246 not found: ID does not exist"
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.723262 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn5tv\" (UniqueName: \"kubernetes.io/projected/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-kube-api-access-vn5tv\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.811635 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae6f2d1c-608d-4ad2-9056-80bb29eac06c" (UID: "ae6f2d1c-608d-4ad2-9056-80bb29eac06c"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.827081 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.828248 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-config-data" (OuterVolumeSpecName: "config-data") pod "4829c2a2-c022-4d7a-86eb-38134480c008" (UID: "4829c2a2-c022-4d7a-86eb-38134480c008"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.834752 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4829c2a2-c022-4d7a-86eb-38134480c008" (UID: "4829c2a2-c022-4d7a-86eb-38134480c008"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.850006 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-config" (OuterVolumeSpecName: "config") pod "ae6f2d1c-608d-4ad2-9056-80bb29eac06c" (UID: "ae6f2d1c-608d-4ad2-9056-80bb29eac06c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.854885 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae6f2d1c-608d-4ad2-9056-80bb29eac06c" (UID: "ae6f2d1c-608d-4ad2-9056-80bb29eac06c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.860150 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae6f2d1c-608d-4ad2-9056-80bb29eac06c" (UID: "ae6f2d1c-608d-4ad2-9056-80bb29eac06c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.882295 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ae6f2d1c-608d-4ad2-9056-80bb29eac06c" (UID: "ae6f2d1c-608d-4ad2-9056-80bb29eac06c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.928603 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.928636 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.928645 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.928656 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.928664 5065 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae6f2d1c-608d-4ad2-9056-80bb29eac06c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:27 crc kubenswrapper[5065]: I1008 13:39:27.928672 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4829c2a2-c022-4d7a-86eb-38134480c008-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.456933 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.470714 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" event={"ID":"ae6f2d1c-608d-4ad2-9056-80bb29eac06c","Type":"ContainerDied","Data":"753fa54c598ed5f4812d8a2d024a34f7c289e1850c5ab37541b2ccaf21f20079"} Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.470772 5065 scope.go:117] "RemoveContainer" containerID="123d0d3503c977fb8e29547b00bade3ab25c9379036637b70e294431d5b56bf0" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.470906 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffc974fdf-nzm7z" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.497522 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.506168 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.519757 5065 scope.go:117] "RemoveContainer" containerID="3885d8ce070d1f851bef97e6ef89f12d92919a06dc4a356ba7016ff55246c4c8" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.529910 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:39:28 crc kubenswrapper[5065]: E1008 13:39:28.530295 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="ceilometer-central-agent" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.530312 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="ceilometer-central-agent" Oct 08 13:39:28 crc kubenswrapper[5065]: E1008 13:39:28.530327 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6f2d1c-608d-4ad2-9056-80bb29eac06c" containerName="init" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.530335 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6f2d1c-608d-4ad2-9056-80bb29eac06c" containerName="init" Oct 08 13:39:28 crc kubenswrapper[5065]: E1008 13:39:28.530353 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6f2d1c-608d-4ad2-9056-80bb29eac06c" containerName="dnsmasq-dns" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.530359 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6f2d1c-608d-4ad2-9056-80bb29eac06c" containerName="dnsmasq-dns" Oct 08 13:39:28 crc kubenswrapper[5065]: E1008 13:39:28.530377 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="proxy-httpd" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.530382 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="proxy-httpd" Oct 08 13:39:28 crc kubenswrapper[5065]: E1008 13:39:28.530407 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="sg-core" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.530428 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="sg-core" Oct 08 13:39:28 crc kubenswrapper[5065]: E1008 13:39:28.530437 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="ceilometer-notification-agent" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.530444 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="ceilometer-notification-agent" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.530609 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="ceilometer-notification-agent" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.530622 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6f2d1c-608d-4ad2-9056-80bb29eac06c" containerName="dnsmasq-dns" Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.530632 5065 
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.530632 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="proxy-httpd"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.530667 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="ceilometer-central-agent"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.530675 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" containerName="sg-core"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.532310 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.536327 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.536470 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.539760 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.550559 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-nzm7z"]
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.563126 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-nzm7z"]
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.582532 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.641368 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.641497 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.641611 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-config-data\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.641668 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqvmj\" (UniqueName: \"kubernetes.io/projected/73ec06a5-eadd-4545-a157-1aa731eabe13-kube-api-access-bqvmj\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.641773 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ec06a5-eadd-4545-a157-1aa731eabe13-log-httpd\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.641897 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-scripts\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.641974 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.642017 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ec06a5-eadd-4545-a157-1aa731eabe13-run-httpd\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.743737 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-scripts\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.743812 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.743845 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ec06a5-eadd-4545-a157-1aa731eabe13-run-httpd\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.743905 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.743935 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.743969 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-config-data\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.743997 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqvmj\" (UniqueName: \"kubernetes.io/projected/73ec06a5-eadd-4545-a157-1aa731eabe13-kube-api-access-bqvmj\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.744017 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ec06a5-eadd-4545-a157-1aa731eabe13-log-httpd\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.744477 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ec06a5-eadd-4545-a157-1aa731eabe13-run-httpd\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.745133 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ec06a5-eadd-4545-a157-1aa731eabe13-log-httpd\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.750242 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.751657 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.753315 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.754799 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-scripts\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.763941 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-config-data\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.764636 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqvmj\" (UniqueName: \"kubernetes.io/projected/73ec06a5-eadd-4545-a157-1aa731eabe13-kube-api-access-bqvmj\") pod \"ceilometer-0\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.888654 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4829c2a2-c022-4d7a-86eb-38134480c008" path="/var/lib/kubelet/pods/4829c2a2-c022-4d7a-86eb-38134480c008/volumes"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.888807 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 13:39:28 crc kubenswrapper[5065]: I1008 13:39:28.889575 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6f2d1c-608d-4ad2-9056-80bb29eac06c" path="/var/lib/kubelet/pods/ae6f2d1c-608d-4ad2-9056-80bb29eac06c/volumes"
Oct 08 13:39:30 crc kubenswrapper[5065]: I1008 13:39:30.051126 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 13:39:30 crc kubenswrapper[5065]: W1008 13:39:30.056605 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73ec06a5_eadd_4545_a157_1aa731eabe13.slice/crio-e7653a79d2d3c1219cf05c177f1e8455487bfe3bbca2af0b4ad223d021e927a9 WatchSource:0}: Error finding container e7653a79d2d3c1219cf05c177f1e8455487bfe3bbca2af0b4ad223d021e927a9: Status 404 returned error can't find the container with id e7653a79d2d3c1219cf05c177f1e8455487bfe3bbca2af0b4ad223d021e927a9
Oct 08 13:39:30 crc kubenswrapper[5065]: I1008 13:39:30.496355 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73ec06a5-eadd-4545-a157-1aa731eabe13","Type":"ContainerStarted","Data":"e7653a79d2d3c1219cf05c177f1e8455487bfe3bbca2af0b4ad223d021e927a9"}
Oct 08 13:39:31 crc kubenswrapper[5065]: I1008 13:39:31.507115 5065 generic.go:334] "Generic (PLEG): container finished" podID="e0b3a4e4-af9e-4625-af67-2631d39d0a0b" containerID="5f69ab5014e9ea4bb50f1b65d4593a168bb3d6a2c7e305250282dedb882013c5" exitCode=0
Oct 08 13:39:31 crc kubenswrapper[5065]: I1008 13:39:31.507217 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qn4n5" event={"ID":"e0b3a4e4-af9e-4625-af67-2631d39d0a0b","Type":"ContainerDied","Data":"5f69ab5014e9ea4bb50f1b65d4593a168bb3d6a2c7e305250282dedb882013c5"}
Oct 08 13:39:31 crc kubenswrapper[5065]: I1008 13:39:31.511092 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73ec06a5-eadd-4545-a157-1aa731eabe13","Type":"ContainerStarted","Data":"cb47b71cac0e504aa65a98ca53c8d8abc53a58802c1dd955a2eb6fd0b50a5da8"}
Oct 08 13:39:32 crc kubenswrapper[5065]: I1008 13:39:32.318958 5065 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 08 13:39:32 crc kubenswrapper[5065]: I1008 13:39:32.521550 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73ec06a5-eadd-4545-a157-1aa731eabe13","Type":"ContainerStarted","Data":"05fb2e1abbe2ed373a22504cfa769a58bda357d67045a5a667156055922438b4"}
Oct 08 13:39:32 crc kubenswrapper[5065]: I1008 13:39:32.521595 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73ec06a5-eadd-4545-a157-1aa731eabe13","Type":"ContainerStarted","Data":"b435edf6aa605158448d0a1d1edd6a7ab24f12bb0cc39517202571a1d63578f3"}
Oct 08 13:39:32 crc kubenswrapper[5065]: I1008 13:39:32.901792 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.021369 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-scripts\") pod \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") "
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.021467 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fcwb\" (UniqueName: \"kubernetes.io/projected/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-kube-api-access-9fcwb\") pod \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") "
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.021496 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-config-data\") pod \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") "
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.021577 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-combined-ca-bundle\") pod \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\" (UID: \"e0b3a4e4-af9e-4625-af67-2631d39d0a0b\") "
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.026795 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-kube-api-access-9fcwb" (OuterVolumeSpecName: "kube-api-access-9fcwb") pod "e0b3a4e4-af9e-4625-af67-2631d39d0a0b" (UID: "e0b3a4e4-af9e-4625-af67-2631d39d0a0b"). InnerVolumeSpecName "kube-api-access-9fcwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.036369 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-scripts" (OuterVolumeSpecName: "scripts") pod "e0b3a4e4-af9e-4625-af67-2631d39d0a0b" (UID: "e0b3a4e4-af9e-4625-af67-2631d39d0a0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.053287 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-config-data" (OuterVolumeSpecName: "config-data") pod "e0b3a4e4-af9e-4625-af67-2631d39d0a0b" (UID: "e0b3a4e4-af9e-4625-af67-2631d39d0a0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.054906 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0b3a4e4-af9e-4625-af67-2631d39d0a0b" (UID: "e0b3a4e4-af9e-4625-af67-2631d39d0a0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.166151 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.166190 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fcwb\" (UniqueName: \"kubernetes.io/projected/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-kube-api-access-9fcwb\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.166203 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.166213 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0b3a4e4-af9e-4625-af67-2631d39d0a0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.536356 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qn4n5" event={"ID":"e0b3a4e4-af9e-4625-af67-2631d39d0a0b","Type":"ContainerDied","Data":"0523134df25f278a7203920e5f529e6539530b5014c98e8da02b9eb9526d6dd8"}
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.536458 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qn4n5"
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.536486 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0523134df25f278a7203920e5f529e6539530b5014c98e8da02b9eb9526d6dd8"
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.718034 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.718266 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09a5449e-5fed-41e2-9242-0a815e34eb73" containerName="nova-api-log" containerID="cri-o://9e836df2e572d16ba92eaaa2a62d68e4736cf3eb3b8d984df8166d8c0a435654" gracePeriod=30
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.718669 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09a5449e-5fed-41e2-9242-0a815e34eb73" containerName="nova-api-api" containerID="cri-o://6137ea6f1340ce791d5379bb5dbaff8b9d90119ca34fd5227f1f557cc453f697" gracePeriod=30
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.734005 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.734544 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3629fb73-6155-44ec-b119-4da654503863" containerName="nova-scheduler-scheduler" containerID="cri-o://41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45" gracePeriod=30
Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.750822 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
containerName="nova-metadata-log" containerID="cri-o://15622e750bb89716e3031a03196f3e5a6968e579900c7ea5194f35a5923d373d" gracePeriod=30 Oct 08 13:39:33 crc kubenswrapper[5065]: I1008 13:39:33.751229 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1f2d114f-59e8-480c-a2ca-87d07079b460" containerName="nova-metadata-metadata" containerID="cri-o://a80e771598f49d727a2569af3f2fddcd88200f27264eacf73589069231740958" gracePeriod=30 Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.550009 5065 generic.go:334] "Generic (PLEG): container finished" podID="09a5449e-5fed-41e2-9242-0a815e34eb73" containerID="6137ea6f1340ce791d5379bb5dbaff8b9d90119ca34fd5227f1f557cc453f697" exitCode=0 Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.550394 5065 generic.go:334] "Generic (PLEG): container finished" podID="09a5449e-5fed-41e2-9242-0a815e34eb73" containerID="9e836df2e572d16ba92eaaa2a62d68e4736cf3eb3b8d984df8166d8c0a435654" exitCode=143 Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.550472 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09a5449e-5fed-41e2-9242-0a815e34eb73","Type":"ContainerDied","Data":"6137ea6f1340ce791d5379bb5dbaff8b9d90119ca34fd5227f1f557cc453f697"} Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.550731 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09a5449e-5fed-41e2-9242-0a815e34eb73","Type":"ContainerDied","Data":"9e836df2e572d16ba92eaaa2a62d68e4736cf3eb3b8d984df8166d8c0a435654"} Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.556281 5065 generic.go:334] "Generic (PLEG): container finished" podID="1f2d114f-59e8-480c-a2ca-87d07079b460" containerID="15622e750bb89716e3031a03196f3e5a6968e579900c7ea5194f35a5923d373d" exitCode=143 Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.556366 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f2d114f-59e8-480c-a2ca-87d07079b460","Type":"ContainerDied","Data":"15622e750bb89716e3031a03196f3e5a6968e579900c7ea5194f35a5923d373d"} Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.563151 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73ec06a5-eadd-4545-a157-1aa731eabe13","Type":"ContainerStarted","Data":"5064c6efe7bd2fd17eb7c7a569db4fa8aa930bcc4f32d0c03a00b49f045cd8eb"} Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.564964 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.590359 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.863494497 podStartE2EDuration="6.590339254s" podCreationTimestamp="2025-10-08 13:39:28 +0000 UTC" firstStartedPulling="2025-10-08 13:39:30.058708312 +0000 UTC m=+1271.836090079" lastFinishedPulling="2025-10-08 13:39:33.785553079 +0000 UTC m=+1275.562934836" observedRunningTime="2025-10-08 13:39:34.580869197 +0000 UTC m=+1276.358250974" watchObservedRunningTime="2025-10-08 13:39:34.590339254 +0000 UTC m=+1276.367721011" Oct 08 13:39:34 crc kubenswrapper[5065]: E1008 13:39:34.774512 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 13:39:34 crc kubenswrapper[5065]: E1008 13:39:34.776261 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 13:39:34 crc kubenswrapper[5065]: E1008 13:39:34.778812 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 13:39:34 crc kubenswrapper[5065]: E1008 13:39:34.778851 5065 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3629fb73-6155-44ec-b119-4da654503863" containerName="nova-scheduler-scheduler" Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.818923 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.910983 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-internal-tls-certs\") pod \"09a5449e-5fed-41e2-9242-0a815e34eb73\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.911033 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-public-tls-certs\") pod \"09a5449e-5fed-41e2-9242-0a815e34eb73\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.911056 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-combined-ca-bundle\") pod \"09a5449e-5fed-41e2-9242-0a815e34eb73\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.911212 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a5449e-5fed-41e2-9242-0a815e34eb73-logs\") pod \"09a5449e-5fed-41e2-9242-0a815e34eb73\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.911287 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jz6w\" (UniqueName: \"kubernetes.io/projected/09a5449e-5fed-41e2-9242-0a815e34eb73-kube-api-access-9jz6w\") pod \"09a5449e-5fed-41e2-9242-0a815e34eb73\" (UID: \"09a5449e-5fed-41e2-9242-0a815e34eb73\") " Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.911306 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-config-data\") pod \"09a5449e-5fed-41e2-9242-0a815e34eb73\" (UID: 
\"09a5449e-5fed-41e2-9242-0a815e34eb73\") " Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.911817 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09a5449e-5fed-41e2-9242-0a815e34eb73-logs" (OuterVolumeSpecName: "logs") pod "09a5449e-5fed-41e2-9242-0a815e34eb73" (UID: "09a5449e-5fed-41e2-9242-0a815e34eb73"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.922756 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09a5449e-5fed-41e2-9242-0a815e34eb73-kube-api-access-9jz6w" (OuterVolumeSpecName: "kube-api-access-9jz6w") pod "09a5449e-5fed-41e2-9242-0a815e34eb73" (UID: "09a5449e-5fed-41e2-9242-0a815e34eb73"). InnerVolumeSpecName "kube-api-access-9jz6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.944598 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-config-data" (OuterVolumeSpecName: "config-data") pod "09a5449e-5fed-41e2-9242-0a815e34eb73" (UID: "09a5449e-5fed-41e2-9242-0a815e34eb73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.944625 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09a5449e-5fed-41e2-9242-0a815e34eb73" (UID: "09a5449e-5fed-41e2-9242-0a815e34eb73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.970084 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "09a5449e-5fed-41e2-9242-0a815e34eb73" (UID: "09a5449e-5fed-41e2-9242-0a815e34eb73"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:34 crc kubenswrapper[5065]: I1008 13:39:34.980641 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "09a5449e-5fed-41e2-9242-0a815e34eb73" (UID: "09a5449e-5fed-41e2-9242-0a815e34eb73"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.013608 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09a5449e-5fed-41e2-9242-0a815e34eb73-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.014015 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jz6w\" (UniqueName: \"kubernetes.io/projected/09a5449e-5fed-41e2-9242-0a815e34eb73-kube-api-access-9jz6w\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.014030 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.014042 5065 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.014054 5065 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.014065 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a5449e-5fed-41e2-9242-0a815e34eb73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.580068 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.584477 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09a5449e-5fed-41e2-9242-0a815e34eb73","Type":"ContainerDied","Data":"d2c6a906f3f453e18bef8087757e04403f19e53bd0ee85641a157e40b715f2ea"} Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.584520 5065 scope.go:117] "RemoveContainer" containerID="6137ea6f1340ce791d5379bb5dbaff8b9d90119ca34fd5227f1f557cc453f697" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.624601 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.625636 5065 scope.go:117] "RemoveContainer" containerID="9e836df2e572d16ba92eaaa2a62d68e4736cf3eb3b8d984df8166d8c0a435654" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.641853 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.654463 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 08 13:39:35 crc kubenswrapper[5065]: E1008 13:39:35.654988 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b3a4e4-af9e-4625-af67-2631d39d0a0b" containerName="nova-manage" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.655005 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b3a4e4-af9e-4625-af67-2631d39d0a0b" containerName="nova-manage" Oct 08 13:39:35 crc kubenswrapper[5065]: E1008 13:39:35.655036 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a5449e-5fed-41e2-9242-0a815e34eb73" containerName="nova-api-log" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 
13:39:35.655044 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a5449e-5fed-41e2-9242-0a815e34eb73" containerName="nova-api-log" Oct 08 13:39:35 crc kubenswrapper[5065]: E1008 13:39:35.655065 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a5449e-5fed-41e2-9242-0a815e34eb73" containerName="nova-api-api" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.655072 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a5449e-5fed-41e2-9242-0a815e34eb73" containerName="nova-api-api" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.655306 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="09a5449e-5fed-41e2-9242-0a815e34eb73" containerName="nova-api-api" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.655339 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="09a5449e-5fed-41e2-9242-0a815e34eb73" containerName="nova-api-log" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.655354 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b3a4e4-af9e-4625-af67-2631d39d0a0b" containerName="nova-manage" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.656628 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.660056 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.660316 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.660503 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.662356 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.727855 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-config-data\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.728048 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfwl9\" (UniqueName: \"kubernetes.io/projected/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-kube-api-access-dfwl9\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.728203 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.728345 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-logs\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.728534 5065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.728644 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-public-tls-certs\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.830397 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.830451 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-public-tls-certs\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.830494 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-config-data\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.830526 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfwl9\" (UniqueName: \"kubernetes.io/projected/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-kube-api-access-dfwl9\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.830568 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.830609 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-logs\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.830985 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-logs\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.834668 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-public-tls-certs\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.835154 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.835265 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-config-data\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.836359 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:35 crc kubenswrapper[5065]: I1008 13:39:35.847918 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfwl9\" (UniqueName: \"kubernetes.io/projected/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-kube-api-access-dfwl9\") pod \"nova-api-0\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " pod="openstack/nova-api-0" Oct 08 13:39:36 crc kubenswrapper[5065]: I1008 13:39:36.033750 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 13:39:36 crc kubenswrapper[5065]: I1008 13:39:36.525629 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 13:39:36 crc kubenswrapper[5065]: I1008 13:39:36.597963 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbca12dd-73a9-4533-b424-ebaf0c8cec0c","Type":"ContainerStarted","Data":"047514c2a74a7f3e0fcc53a8746fdb4da8950d50ef46d467841fa27c35066882"} Oct 08 13:39:36 crc kubenswrapper[5065]: I1008 13:39:36.894090 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09a5449e-5fed-41e2-9242-0a815e34eb73" path="/var/lib/kubelet/pods/09a5449e-5fed-41e2-9242-0a815e34eb73/volumes" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.375316 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.459540 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-nova-metadata-tls-certs\") pod \"1f2d114f-59e8-480c-a2ca-87d07079b460\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.459959 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-config-data\") pod \"1f2d114f-59e8-480c-a2ca-87d07079b460\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.460124 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-852zs\" (UniqueName: \"kubernetes.io/projected/1f2d114f-59e8-480c-a2ca-87d07079b460-kube-api-access-852zs\") pod \"1f2d114f-59e8-480c-a2ca-87d07079b460\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.460229 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f2d114f-59e8-480c-a2ca-87d07079b460-logs\") pod \"1f2d114f-59e8-480c-a2ca-87d07079b460\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.460347 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-combined-ca-bundle\") pod \"1f2d114f-59e8-480c-a2ca-87d07079b460\" (UID: \"1f2d114f-59e8-480c-a2ca-87d07079b460\") " Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.460853 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2d114f-59e8-480c-a2ca-87d07079b460-logs" (OuterVolumeSpecName: "logs") pod "1f2d114f-59e8-480c-a2ca-87d07079b460" (UID: "1f2d114f-59e8-480c-a2ca-87d07079b460"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.461203 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f2d114f-59e8-480c-a2ca-87d07079b460-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.465283 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2d114f-59e8-480c-a2ca-87d07079b460-kube-api-access-852zs" (OuterVolumeSpecName: "kube-api-access-852zs") pod "1f2d114f-59e8-480c-a2ca-87d07079b460" (UID: "1f2d114f-59e8-480c-a2ca-87d07079b460"). InnerVolumeSpecName "kube-api-access-852zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.486476 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-config-data" (OuterVolumeSpecName: "config-data") pod "1f2d114f-59e8-480c-a2ca-87d07079b460" (UID: "1f2d114f-59e8-480c-a2ca-87d07079b460"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.497987 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f2d114f-59e8-480c-a2ca-87d07079b460" (UID: "1f2d114f-59e8-480c-a2ca-87d07079b460"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.509443 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1f2d114f-59e8-480c-a2ca-87d07079b460" (UID: "1f2d114f-59e8-480c-a2ca-87d07079b460"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.564110 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-852zs\" (UniqueName: \"kubernetes.io/projected/1f2d114f-59e8-480c-a2ca-87d07079b460-kube-api-access-852zs\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.564158 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.564170 5065 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.564184 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2d114f-59e8-480c-a2ca-87d07079b460-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.613159 5065 generic.go:334] "Generic (PLEG): container finished" podID="1f2d114f-59e8-480c-a2ca-87d07079b460" containerID="a80e771598f49d727a2569af3f2fddcd88200f27264eacf73589069231740958" exitCode=0 Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.613198 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f2d114f-59e8-480c-a2ca-87d07079b460","Type":"ContainerDied","Data":"a80e771598f49d727a2569af3f2fddcd88200f27264eacf73589069231740958"} Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.613239 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f2d114f-59e8-480c-a2ca-87d07079b460","Type":"ContainerDied","Data":"2cd86d04123f241feeec881a222f45a35514dfdaccd0b46480f00d319976b930"} Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.613257 5065 scope.go:117] "RemoveContainer" containerID="a80e771598f49d727a2569af3f2fddcd88200f27264eacf73589069231740958" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.613254 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.617440 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbca12dd-73a9-4533-b424-ebaf0c8cec0c","Type":"ContainerStarted","Data":"14b3fa78ec0416aa6cf474683bcb75b805d9d36b7d9d4631a80e461d654dfd5a"} Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.617469 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbca12dd-73a9-4533-b424-ebaf0c8cec0c","Type":"ContainerStarted","Data":"e59b55badd92b9747669cea1a276f0f4749ca1574e5171035193093770c5a6ca"} Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.636510 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.636486301 podStartE2EDuration="2.636486301s" podCreationTimestamp="2025-10-08 13:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:39:37.636158461 +0000 UTC m=+1279.413540218" watchObservedRunningTime="2025-10-08 13:39:37.636486301 +0000 UTC m=+1279.413868058" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.650585 5065 scope.go:117] "RemoveContainer" containerID="15622e750bb89716e3031a03196f3e5a6968e579900c7ea5194f35a5923d373d" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.671580 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.686589 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.690576 5065 scope.go:117] "RemoveContainer" containerID="a80e771598f49d727a2569af3f2fddcd88200f27264eacf73589069231740958" Oct 08 13:39:37 crc kubenswrapper[5065]: E1008 13:39:37.691140 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a80e771598f49d727a2569af3f2fddcd88200f27264eacf73589069231740958\": container with ID starting with a80e771598f49d727a2569af3f2fddcd88200f27264eacf73589069231740958 not found: ID does not exist" containerID="a80e771598f49d727a2569af3f2fddcd88200f27264eacf73589069231740958" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.691248 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a80e771598f49d727a2569af3f2fddcd88200f27264eacf73589069231740958"} err="failed to get container status \"a80e771598f49d727a2569af3f2fddcd88200f27264eacf73589069231740958\": rpc error: code = NotFound desc = could not find container \"a80e771598f49d727a2569af3f2fddcd88200f27264eacf73589069231740958\": container with ID starting with a80e771598f49d727a2569af3f2fddcd88200f27264eacf73589069231740958 not found: ID does not exist" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.691343 5065 scope.go:117] "RemoveContainer" containerID="15622e750bb89716e3031a03196f3e5a6968e579900c7ea5194f35a5923d373d" Oct 08 13:39:37 crc kubenswrapper[5065]: E1008 13:39:37.691724 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15622e750bb89716e3031a03196f3e5a6968e579900c7ea5194f35a5923d373d\": container with ID starting with 15622e750bb89716e3031a03196f3e5a6968e579900c7ea5194f35a5923d373d not found: ID does not exist" 
containerID="15622e750bb89716e3031a03196f3e5a6968e579900c7ea5194f35a5923d373d" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.691809 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15622e750bb89716e3031a03196f3e5a6968e579900c7ea5194f35a5923d373d"} err="failed to get container status \"15622e750bb89716e3031a03196f3e5a6968e579900c7ea5194f35a5923d373d\": rpc error: code = NotFound desc = could not find container \"15622e750bb89716e3031a03196f3e5a6968e579900c7ea5194f35a5923d373d\": container with ID starting with 15622e750bb89716e3031a03196f3e5a6968e579900c7ea5194f35a5923d373d not found: ID does not exist" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.697893 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:39:37 crc kubenswrapper[5065]: E1008 13:39:37.698612 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2d114f-59e8-480c-a2ca-87d07079b460" containerName="nova-metadata-metadata" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.698636 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2d114f-59e8-480c-a2ca-87d07079b460" containerName="nova-metadata-metadata" Oct 08 13:39:37 crc kubenswrapper[5065]: E1008 13:39:37.698651 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2d114f-59e8-480c-a2ca-87d07079b460" containerName="nova-metadata-log" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.698659 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2d114f-59e8-480c-a2ca-87d07079b460" containerName="nova-metadata-log" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.698948 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2d114f-59e8-480c-a2ca-87d07079b460" containerName="nova-metadata-log" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.698971 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2d114f-59e8-480c-a2ca-87d07079b460" containerName="nova-metadata-metadata" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.700319 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.705841 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.706112 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.707924 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.768835 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.768890 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.768998 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhbd2\" (UniqueName: \"kubernetes.io/projected/493c63a1-0210-4a70-a964-79522491fd05-kube-api-access-xhbd2\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.769236 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-config-data\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.769520 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/493c63a1-0210-4a70-a964-79522491fd05-logs\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.870329 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhbd2\" (UniqueName: \"kubernetes.io/projected/493c63a1-0210-4a70-a964-79522491fd05-kube-api-access-xhbd2\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.870406 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-config-data\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.870471 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/493c63a1-0210-4a70-a964-79522491fd05-logs\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 
13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.870497 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.870520 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.871071 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/493c63a1-0210-4a70-a964-79522491fd05-logs\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.874354 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.874471 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-config-data\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.875157 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:37 crc kubenswrapper[5065]: I1008 13:39:37.885819 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhbd2\" (UniqueName: \"kubernetes.io/projected/493c63a1-0210-4a70-a964-79522491fd05-kube-api-access-xhbd2\") pod \"nova-metadata-0\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " pod="openstack/nova-metadata-0" Oct 08 13:39:38 crc kubenswrapper[5065]: I1008 13:39:38.017157 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 13:39:38 crc kubenswrapper[5065]: I1008 13:39:38.511217 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:39:38 crc kubenswrapper[5065]: I1008 13:39:38.630475 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"493c63a1-0210-4a70-a964-79522491fd05","Type":"ContainerStarted","Data":"1d36fcd053ebdd5c48504d2380f816594b9018d1995c86edd1baa22df7cb73f3"} Oct 08 13:39:38 crc kubenswrapper[5065]: I1008 13:39:38.889915 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2d114f-59e8-480c-a2ca-87d07079b460" path="/var/lib/kubelet/pods/1f2d114f-59e8-480c-a2ca-87d07079b460/volumes" Oct 08 13:39:38 crc kubenswrapper[5065]: I1008 13:39:38.950133 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.005577 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3629fb73-6155-44ec-b119-4da654503863-config-data\") pod \"3629fb73-6155-44ec-b119-4da654503863\" (UID: \"3629fb73-6155-44ec-b119-4da654503863\") " Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.005761 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3629fb73-6155-44ec-b119-4da654503863-combined-ca-bundle\") pod \"3629fb73-6155-44ec-b119-4da654503863\" (UID: \"3629fb73-6155-44ec-b119-4da654503863\") " Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.008423 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxxmg\" (UniqueName: \"kubernetes.io/projected/3629fb73-6155-44ec-b119-4da654503863-kube-api-access-rxxmg\") pod \"3629fb73-6155-44ec-b119-4da654503863\" (UID: \"3629fb73-6155-44ec-b119-4da654503863\") " Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.014801 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3629fb73-6155-44ec-b119-4da654503863-kube-api-access-rxxmg" (OuterVolumeSpecName: "kube-api-access-rxxmg") pod "3629fb73-6155-44ec-b119-4da654503863" (UID: "3629fb73-6155-44ec-b119-4da654503863"). InnerVolumeSpecName "kube-api-access-rxxmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.048044 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3629fb73-6155-44ec-b119-4da654503863-config-data" (OuterVolumeSpecName: "config-data") pod "3629fb73-6155-44ec-b119-4da654503863" (UID: "3629fb73-6155-44ec-b119-4da654503863"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.052958 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3629fb73-6155-44ec-b119-4da654503863-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3629fb73-6155-44ec-b119-4da654503863" (UID: "3629fb73-6155-44ec-b119-4da654503863"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.113090 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3629fb73-6155-44ec-b119-4da654503863-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.113124 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3629fb73-6155-44ec-b119-4da654503863-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.113135 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxxmg\" (UniqueName: \"kubernetes.io/projected/3629fb73-6155-44ec-b119-4da654503863-kube-api-access-rxxmg\") on node \"crc\" DevicePath \"\"" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.646864 5065 generic.go:334] "Generic (PLEG): container finished" podID="3629fb73-6155-44ec-b119-4da654503863" containerID="41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45" exitCode=0 Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.646992 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.646976 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3629fb73-6155-44ec-b119-4da654503863","Type":"ContainerDied","Data":"41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45"} Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.647496 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3629fb73-6155-44ec-b119-4da654503863","Type":"ContainerDied","Data":"02f23b1c88f36f5c1a5f7589005d2f3fccddeec409fa599685c3200b2dcdf3b6"} Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.647529 5065 scope.go:117] "RemoveContainer" containerID="41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.650145 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"493c63a1-0210-4a70-a964-79522491fd05","Type":"ContainerStarted","Data":"4c694dd4a31cd2734ca3f755d48b6352d8867ca981e402024f204138ea07c425"} Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.651627 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"493c63a1-0210-4a70-a964-79522491fd05","Type":"ContainerStarted","Data":"f00de157ca335b201eedfe40fcdec9a8b8d879af63720162be6c83a57024286e"} Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.685100 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.684955461 podStartE2EDuration="2.684955461s" podCreationTimestamp="2025-10-08 13:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:39:39.675135224 +0000 UTC m=+1281.452516971" watchObservedRunningTime="2025-10-08 13:39:39.684955461 +0000 UTC m=+1281.462337228" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.685830 5065 scope.go:117] "RemoveContainer" containerID="41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45" Oct 08 13:39:39 crc kubenswrapper[5065]: E1008 13:39:39.687839 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45\": container with ID starting with 41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45 not found: ID does not exist" containerID="41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.687886 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45"} err="failed to get container status \"41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45\": rpc error: code = NotFound desc = could not find container \"41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45\": container with ID starting with 41b0d27ee10a41a972b07be0e21d2be2b8d8892a76237d3a8bd3eee3ad8dbb45 not found: ID does not exist" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.696331 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.707249 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.714634 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:39:39 crc kubenswrapper[5065]: E1008 13:39:39.714989 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3629fb73-6155-44ec-b119-4da654503863" containerName="nova-scheduler-scheduler" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.715005 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3629fb73-6155-44ec-b119-4da654503863" containerName="nova-scheduler-scheduler" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.715203 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="3629fb73-6155-44ec-b119-4da654503863" containerName="nova-scheduler-scheduler" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.715851 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.718305 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.722787 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-config-data\") pod \"nova-scheduler-0\" (UID: \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\") " pod="openstack/nova-scheduler-0" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.722958 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\") " pod="openstack/nova-scheduler-0" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.723118 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmqgk\" (UniqueName: \"kubernetes.io/projected/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-kube-api-access-fmqgk\") pod \"nova-scheduler-0\" (UID: \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\") " pod="openstack/nova-scheduler-0" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.733366 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.825671 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmqgk\" (UniqueName: \"kubernetes.io/projected/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-kube-api-access-fmqgk\") pod \"nova-scheduler-0\" (UID: \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\") " pod="openstack/nova-scheduler-0" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.825835 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-config-data\") pod \"nova-scheduler-0\" (UID: \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\") " pod="openstack/nova-scheduler-0" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.826063 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\") " pod="openstack/nova-scheduler-0" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.832020 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\") " pod="openstack/nova-scheduler-0" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.835771 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-config-data\") pod \"nova-scheduler-0\" (UID: \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\") " pod="openstack/nova-scheduler-0" Oct 08 13:39:39 crc kubenswrapper[5065]: I1008 13:39:39.847977 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmqgk\" (UniqueName: 
\"kubernetes.io/projected/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-kube-api-access-fmqgk\") pod \"nova-scheduler-0\" (UID: \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\") " pod="openstack/nova-scheduler-0" Oct 08 13:39:40 crc kubenswrapper[5065]: I1008 13:39:40.032020 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 13:39:40 crc kubenswrapper[5065]: I1008 13:39:40.496188 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:39:40 crc kubenswrapper[5065]: W1008 13:39:40.502587 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ce5c750_265a_4589_8f5e_a9e6a846d0d0.slice/crio-543b997de8c7d7ba73f27fa4af86fe456d8210888fea508969854db9f409ef9a WatchSource:0}: Error finding container 543b997de8c7d7ba73f27fa4af86fe456d8210888fea508969854db9f409ef9a: Status 404 returned error can't find the container with id 543b997de8c7d7ba73f27fa4af86fe456d8210888fea508969854db9f409ef9a Oct 08 13:39:40 crc kubenswrapper[5065]: I1008 13:39:40.659915 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6ce5c750-265a-4589-8f5e-a9e6a846d0d0","Type":"ContainerStarted","Data":"543b997de8c7d7ba73f27fa4af86fe456d8210888fea508969854db9f409ef9a"} Oct 08 13:39:40 crc kubenswrapper[5065]: I1008 13:39:40.885934 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3629fb73-6155-44ec-b119-4da654503863" path="/var/lib/kubelet/pods/3629fb73-6155-44ec-b119-4da654503863/volumes" Oct 08 13:39:41 crc kubenswrapper[5065]: I1008 13:39:41.673862 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6ce5c750-265a-4589-8f5e-a9e6a846d0d0","Type":"ContainerStarted","Data":"91afe7ac30e8893277300be9b194d6af0055b047630cb6d73444b3aff6643b7b"} Oct 08 13:39:41 crc kubenswrapper[5065]: I1008 13:39:41.704508 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.704480745 podStartE2EDuration="2.704480745s" podCreationTimestamp="2025-10-08 13:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:39:41.699579347 +0000 UTC m=+1283.476961114" watchObservedRunningTime="2025-10-08 13:39:41.704480745 +0000 UTC m=+1283.481862542" Oct 08 13:39:43 crc kubenswrapper[5065]: I1008 13:39:43.017821 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 13:39:43 crc kubenswrapper[5065]: I1008 13:39:43.020866 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 13:39:45 crc kubenswrapper[5065]: I1008 13:39:45.032675 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 08 13:39:46 crc kubenswrapper[5065]: I1008 13:39:46.034282 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 13:39:46 crc kubenswrapper[5065]: I1008 13:39:46.034352 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 13:39:47 crc kubenswrapper[5065]: I1008 13:39:47.053674 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bbca12dd-73a9-4533-b424-ebaf0c8cec0c" containerName="nova-api-log" probeResult="failure" output="Get 
\"https://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 13:39:47 crc kubenswrapper[5065]: I1008 13:39:47.053700 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bbca12dd-73a9-4533-b424-ebaf0c8cec0c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 13:39:48 crc kubenswrapper[5065]: I1008 13:39:48.017777 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 13:39:48 crc kubenswrapper[5065]: I1008 13:39:48.017842 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 13:39:49 crc kubenswrapper[5065]: I1008 13:39:49.023615 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="493c63a1-0210-4a70-a964-79522491fd05" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 13:39:49 crc kubenswrapper[5065]: I1008 13:39:49.029709 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="493c63a1-0210-4a70-a964-79522491fd05" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 13:39:50 crc kubenswrapper[5065]: I1008 13:39:50.033229 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 08 13:39:50 crc kubenswrapper[5065]: I1008 13:39:50.065951 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 08 13:39:50 crc kubenswrapper[5065]: I1008 13:39:50.820507 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 08 13:39:54 crc kubenswrapper[5065]: I1008 13:39:54.375102 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:39:54 crc kubenswrapper[5065]: I1008 13:39:54.375502 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:39:54 crc kubenswrapper[5065]: I1008 13:39:54.375558 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:39:54 crc kubenswrapper[5065]: I1008 13:39:54.376640 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5fd12d0a8c18886d62fe0f77c00a82717c3aaf19bdc8e84b083c3e64ad847f5b"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 13:39:54 crc kubenswrapper[5065]: I1008 13:39:54.376774 5065 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://5fd12d0a8c18886d62fe0f77c00a82717c3aaf19bdc8e84b083c3e64ad847f5b" gracePeriod=600 Oct 08 13:39:54 crc kubenswrapper[5065]: I1008 13:39:54.830192 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="5fd12d0a8c18886d62fe0f77c00a82717c3aaf19bdc8e84b083c3e64ad847f5b" exitCode=0 Oct 08 13:39:54 crc kubenswrapper[5065]: I1008 13:39:54.830368 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"5fd12d0a8c18886d62fe0f77c00a82717c3aaf19bdc8e84b083c3e64ad847f5b"} Oct 08 13:39:54 crc kubenswrapper[5065]: I1008 13:39:54.830571 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"} Oct 08 13:39:54 crc kubenswrapper[5065]: I1008 13:39:54.830601 5065 scope.go:117] "RemoveContainer" containerID="31f1099402b40e4377d6225bd79cd57be8759f2926970d8fbf7335327beefc81" Oct 08 13:39:56 crc kubenswrapper[5065]: I1008 13:39:56.042037 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 13:39:56 crc kubenswrapper[5065]: I1008 13:39:56.042630 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 13:39:56 crc kubenswrapper[5065]: I1008 13:39:56.043763 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 13:39:56 crc kubenswrapper[5065]: I1008 13:39:56.043873 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 13:39:56 crc kubenswrapper[5065]: I1008 13:39:56.050032 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 13:39:56 crc kubenswrapper[5065]: I1008 13:39:56.054140 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 13:39:58 crc kubenswrapper[5065]: I1008 13:39:58.024269 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 13:39:58 crc kubenswrapper[5065]: I1008 13:39:58.025867 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 13:39:58 crc kubenswrapper[5065]: I1008 13:39:58.031390 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 13:39:58 crc kubenswrapper[5065]: I1008 13:39:58.904380 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 13:39:58 crc kubenswrapper[5065]: I1008 13:39:58.906235 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.021504 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.022273 5065 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/openstackclient" podUID="2136579b-6d89-48ff-b960-a0401eb9af4c" containerName="openstackclient" containerID="cri-o://9464772db83ebfdeae74f66036b1b7d4c94daed370654f9e39056ee48ce23e40" gracePeriod=2 Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.042722 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.120915 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.121197 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0d473a1f-35dc-4b20-a344-19c23f1c8c06" containerName="cinder-scheduler" containerID="cri-o://031335faa75843deb180f0a407142ad9a9127dae436f5b2f0f90352f75ef55b1" gracePeriod=30 Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.121572 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0d473a1f-35dc-4b20-a344-19c23f1c8c06" containerName="probe" containerID="cri-o://2f9df638248c0e2761238e4dd3780cc9ae247b9e55d3548a3bb207837154cf62" gracePeriod=30 Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.183324 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-l67ps"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.183636 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-l67ps" podUID="4eba221c-653d-434a-a486-16be41c4a5c4" containerName="openstack-network-exporter" containerID="cri-o://18cb61ad73086df94f6a3e8295ee1d22474fda18c12c345421b646c270cb0232" gracePeriod=30 Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.207809 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xnw9m"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.242494 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placementfb78-account-delete-xfxsd"] Oct 08 13:40:20 crc kubenswrapper[5065]: E1008 13:40:20.243049 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2136579b-6d89-48ff-b960-a0401eb9af4c" containerName="openstackclient" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.243069 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="2136579b-6d89-48ff-b960-a0401eb9af4c" containerName="openstackclient" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.243332 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="2136579b-6d89-48ff-b960-a0401eb9af4c" containerName="openstackclient" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.244178 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placementfb78-account-delete-xfxsd" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.269069 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.269269 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" containerName="cinder-api-log" containerID="cri-o://6e6bc70c0d9042c36b539f3256634e4e3089df774e755ee10132f50e60ab06f9" gracePeriod=30 Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.269348 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" containerName="cinder-api" containerID="cri-o://1bf0529c90d0ac2dd5a5b0e88e4e632725a93d83c5bbb3394596dcada46534fe" gracePeriod=30 Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.313484 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.339488 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placementfb78-account-delete-xfxsd"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.356754 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-f9wxn"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.376653 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj2w6\" (UniqueName: \"kubernetes.io/projected/38fe9b6a-9cdf-4585-a585-474172306dd9-kube-api-access-lj2w6\") pod \"placementfb78-account-delete-xfxsd\" (UID: \"38fe9b6a-9cdf-4585-a585-474172306dd9\") " pod="openstack/placementfb78-account-delete-xfxsd" Oct 08 13:40:20 crc kubenswrapper[5065]: E1008 13:40:20.378238 5065 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 08 13:40:20 crc kubenswrapper[5065]: E1008 13:40:20.378283 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data podName:a416f725-cd7c-4bd8-9123-28cad18157d9 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:20.878269491 +0000 UTC m=+1322.655651248 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data") pod "rabbitmq-cell1-server-0" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9") : configmap "rabbitmq-cell1-config-data" not found Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.420094 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.420315 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="2c2f3965-f057-4b1d-bbc9-7235ac48ed49" containerName="ovn-northd" containerID="cri-o://cd7a89294fe370f8f8e4fa9239f9e8afee5cb7f783b16a353fb49a9e06106fbe" gracePeriod=30 Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.420448 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="2c2f3965-f057-4b1d-bbc9-7235ac48ed49" containerName="openstack-network-exporter" containerID="cri-o://2c302eb0bc6fc03213ec7ffaa2e249422a78eeddefdda92efac176790ead6fa9" gracePeriod=30 Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.481875 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj2w6\" (UniqueName: \"kubernetes.io/projected/38fe9b6a-9cdf-4585-a585-474172306dd9-kube-api-access-lj2w6\") pod \"placementfb78-account-delete-xfxsd\" (UID: \"38fe9b6a-9cdf-4585-a585-474172306dd9\") " pod="openstack/placementfb78-account-delete-xfxsd" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.487403 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutronca6e-account-delete-g4lj7"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.488605 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutronca6e-account-delete-g4lj7" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.540312 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj2w6\" (UniqueName: \"kubernetes.io/projected/38fe9b6a-9cdf-4585-a585-474172306dd9-kube-api-access-lj2w6\") pod \"placementfb78-account-delete-xfxsd\" (UID: \"38fe9b6a-9cdf-4585-a585-474172306dd9\") " pod="openstack/placementfb78-account-delete-xfxsd" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.570680 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutronca6e-account-delete-g4lj7"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.585205 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np2gj\" (UniqueName: \"kubernetes.io/projected/80b97e55-65fa-4e4a-becd-d13dd95bb78a-kube-api-access-np2gj\") pod \"neutronca6e-account-delete-g4lj7\" (UID: \"80b97e55-65fa-4e4a-becd-d13dd95bb78a\") " pod="openstack/neutronca6e-account-delete-g4lj7" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.608979 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hmh6z"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.622327 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placementfb78-account-delete-xfxsd" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.633109 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican5412-account-delete-fh88w"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.634314 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican5412-account-delete-fh88w" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.653185 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hmh6z"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.668496 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.685306 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican5412-account-delete-fh88w"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.686531 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k62xc\" (UniqueName: \"kubernetes.io/projected/36ed295f-7baa-466e-8a26-6d923a84d1b5-kube-api-access-k62xc\") pod \"barbican5412-account-delete-fh88w\" (UID: \"36ed295f-7baa-466e-8a26-6d923a84d1b5\") " pod="openstack/barbican5412-account-delete-fh88w" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.686724 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np2gj\" (UniqueName: \"kubernetes.io/projected/80b97e55-65fa-4e4a-becd-d13dd95bb78a-kube-api-access-np2gj\") pod \"neutronca6e-account-delete-g4lj7\" (UID: \"80b97e55-65fa-4e4a-becd-d13dd95bb78a\") " pod="openstack/neutronca6e-account-delete-g4lj7" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.703527 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance8439-account-delete-zlfcc"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.704778 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance8439-account-delete-zlfcc" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.721468 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance8439-account-delete-zlfcc"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.740602 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np2gj\" (UniqueName: \"kubernetes.io/projected/80b97e55-65fa-4e4a-becd-d13dd95bb78a-kube-api-access-np2gj\") pod \"neutronca6e-account-delete-g4lj7\" (UID: \"80b97e55-65fa-4e4a-becd-d13dd95bb78a\") " pod="openstack/neutronca6e-account-delete-g4lj7" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.788099 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c28d\" (UniqueName: \"kubernetes.io/projected/f60057cf-9f14-4fbd-b161-e27abcc9c7a5-kube-api-access-6c28d\") pod \"glance8439-account-delete-zlfcc\" (UID: \"f60057cf-9f14-4fbd-b161-e27abcc9c7a5\") " pod="openstack/glance8439-account-delete-zlfcc" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.788200 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k62xc\" (UniqueName: \"kubernetes.io/projected/36ed295f-7baa-466e-8a26-6d923a84d1b5-kube-api-access-k62xc\") pod \"barbican5412-account-delete-fh88w\" (UID: \"36ed295f-7baa-466e-8a26-6d923a84d1b5\") " pod="openstack/barbican5412-account-delete-fh88w" Oct 08 13:40:20 crc kubenswrapper[5065]: E1008 13:40:20.790483 5065 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 08 13:40:20 crc kubenswrapper[5065]: E1008 13:40:20.790536 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data 
podName:ae3d89be-0a42-4a3d-914c-3bff67bd37b4 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:21.29052461 +0000 UTC m=+1323.067906367 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data") pod "rabbitmq-server-0" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4") : configmap "rabbitmq-config-data" not found Oct 08 13:40:20 crc kubenswrapper[5065]: E1008 13:40:20.799070 5065 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-xnw9m" message=< Oct 08 13:40:20 crc kubenswrapper[5065]: Exiting ovn-controller (1) [ OK ] Oct 08 13:40:20 crc kubenswrapper[5065]: > Oct 08 13:40:20 crc kubenswrapper[5065]: E1008 13:40:20.799115 5065 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-xnw9m" podUID="4749b7e4-3896-474d-84b3-8ddf351a24ac" containerName="ovn-controller" containerID="cri-o://a44b760b0eeef2da2d46263b1c69d9a3f20ef2196ecd4ad96ea02f39ea7d5e50" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.799145 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-xnw9m" podUID="4749b7e4-3896-474d-84b3-8ddf351a24ac" containerName="ovn-controller" containerID="cri-o://a44b760b0eeef2da2d46263b1c69d9a3f20ef2196ecd4ad96ea02f39ea7d5e50" gracePeriod=30 Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.827148 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vb8jx"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.829551 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k62xc\" (UniqueName: \"kubernetes.io/projected/36ed295f-7baa-466e-8a26-6d923a84d1b5-kube-api-access-k62xc\") pod \"barbican5412-account-delete-fh88w\" (UID: \"36ed295f-7baa-466e-8a26-6d923a84d1b5\") " pod="openstack/barbican5412-account-delete-fh88w" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.852254 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vb8jx"] Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.917626 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c28d\" (UniqueName: \"kubernetes.io/projected/f60057cf-9f14-4fbd-b161-e27abcc9c7a5-kube-api-access-6c28d\") pod \"glance8439-account-delete-zlfcc\" (UID: \"f60057cf-9f14-4fbd-b161-e27abcc9c7a5\") " pod="openstack/glance8439-account-delete-zlfcc" Oct 08 13:40:20 crc kubenswrapper[5065]: E1008 13:40:20.918864 5065 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 08 13:40:20 crc kubenswrapper[5065]: E1008 13:40:20.919099 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data podName:a416f725-cd7c-4bd8-9123-28cad18157d9 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:21.919080783 +0000 UTC m=+1323.696462540 (durationBeforeRetry 1s). 
Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.944097 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutronca6e-account-delete-g4lj7" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.966119 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c28d\" (UniqueName: \"kubernetes.io/projected/f60057cf-9f14-4fbd-b161-e27abcc9c7a5-kube-api-access-6c28d\") pod \"glance8439-account-delete-zlfcc\" (UID: \"f60057cf-9f14-4fbd-b161-e27abcc9c7a5\") " pod="openstack/glance8439-account-delete-zlfcc" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.981758 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican5412-account-delete-fh88w" Oct 08 13:40:20 crc kubenswrapper[5065]: I1008 13:40:20.998314 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance8439-account-delete-zlfcc" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.214324 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12f7954b-c868-4e42-80d4-b7285d4f20e5" path="/var/lib/kubelet/pods/12f7954b-c868-4e42-80d4-b7285d4f20e5/volumes" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.241567 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="323689c5-d75e-44c2-aa45-3728bea780ff" path="/var/lib/kubelet/pods/323689c5-d75e-44c2-aa45-3728bea780ff/volumes" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.242212 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapi6610-account-delete-vhpbl"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.263251 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi6610-account-delete-vhpbl"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.263351 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi6610-account-delete-vhpbl" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.267668 5065 generic.go:334] "Generic (PLEG): container finished" podID="4749b7e4-3896-474d-84b3-8ddf351a24ac" containerID="a44b760b0eeef2da2d46263b1c69d9a3f20ef2196ecd4ad96ea02f39ea7d5e50" exitCode=0 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.267755 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xnw9m" event={"ID":"4749b7e4-3896-474d-84b3-8ddf351a24ac","Type":"ContainerDied","Data":"a44b760b0eeef2da2d46263b1c69d9a3f20ef2196ecd4ad96ea02f39ea7d5e50"} Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.285313 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-l67ps_4eba221c-653d-434a-a486-16be41c4a5c4/openstack-network-exporter/0.log" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.285359 5065 generic.go:334] "Generic (PLEG): container finished" podID="4eba221c-653d-434a-a486-16be41c4a5c4" containerID="18cb61ad73086df94f6a3e8295ee1d22474fda18c12c345421b646c270cb0232" exitCode=2 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.285405 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-l67ps" event={"ID":"4eba221c-653d-434a-a486-16be41c4a5c4","Type":"ContainerDied","Data":"18cb61ad73086df94f6a3e8295ee1d22474fda18c12c345421b646c270cb0232"} Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.303186 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-d7qm9"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.338560 5065 generic.go:334] "Generic (PLEG): container finished" podID="2c2f3965-f057-4b1d-bbc9-7235ac48ed49" containerID="2c302eb0bc6fc03213ec7ffaa2e249422a78eeddefdda92efac176790ead6fa9" exitCode=2 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.338648 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c2f3965-f057-4b1d-bbc9-7235ac48ed49","Type":"ContainerDied","Data":"2c302eb0bc6fc03213ec7ffaa2e249422a78eeddefdda92efac176790ead6fa9"} Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.342634 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl9mw\" (UniqueName: \"kubernetes.io/projected/b6712e27-2a2f-43a8-8c79-dd7b5090d987-kube-api-access-jl9mw\") pod \"novaapi6610-account-delete-vhpbl\" (UID: \"b6712e27-2a2f-43a8-8c79-dd7b5090d987\") " pod="openstack/novaapi6610-account-delete-vhpbl" Oct 08 13:40:21 crc kubenswrapper[5065]: E1008 13:40:21.342812 5065 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 08 13:40:21 crc kubenswrapper[5065]: E1008 13:40:21.342861 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data podName:ae3d89be-0a42-4a3d-914c-3bff67bd37b4 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:22.342844752 +0000 UTC m=+1324.120226509 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data") pod "rabbitmq-server-0" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4") : configmap "rabbitmq-config-data" not found Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.354023 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-d7qm9"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.371101 5065 generic.go:334] "Generic (PLEG): container finished" podID="c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" containerID="6e6bc70c0d9042c36b539f3256634e4e3089df774e755ee10132f50e60ab06f9" exitCode=143 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.371142 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad","Type":"ContainerDied","Data":"6e6bc70c0d9042c36b539f3256634e4e3089df774e755ee10132f50e60ab06f9"} Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.379402 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-dtvgx"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.400697 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-dtvgx"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.435471 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell0be58-account-delete-r2zk9"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.437111 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell0be58-account-delete-r2zk9" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.450705 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl9mw\" (UniqueName: \"kubernetes.io/projected/b6712e27-2a2f-43a8-8c79-dd7b5090d987-kube-api-access-jl9mw\") pod \"novaapi6610-account-delete-vhpbl\" (UID: \"b6712e27-2a2f-43a8-8c79-dd7b5090d987\") " pod="openstack/novaapi6610-account-delete-vhpbl" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.461467 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell13b0c-account-delete-2wjl9"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.462907 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell13b0c-account-delete-2wjl9" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.506777 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell0be58-account-delete-r2zk9"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.543764 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell13b0c-account-delete-2wjl9"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.552547 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frmmv\" (UniqueName: \"kubernetes.io/projected/1582f178-44ee-4e28-a6d6-1d6a29050b56-kube-api-access-frmmv\") pod \"novacell0be58-account-delete-r2zk9\" (UID: \"1582f178-44ee-4e28-a6d6-1d6a29050b56\") " pod="openstack/novacell0be58-account-delete-r2zk9" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.572279 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl9mw\" (UniqueName: \"kubernetes.io/projected/b6712e27-2a2f-43a8-8c79-dd7b5090d987-kube-api-access-jl9mw\") pod \"novaapi6610-account-delete-vhpbl\" (UID: \"b6712e27-2a2f-43a8-8c79-dd7b5090d987\") " pod="openstack/novaapi6610-account-delete-vhpbl" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.584866 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.585442 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="b215a42c-d422-4db9-a83e-df79f7bff9e6" containerName="openstack-network-exporter" containerID="cri-o://87f2173752e5478c6a5d9a8376b7108d4661ca6ba8b7eb3ed95cd8acf5ce88a2" gracePeriod=300 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.620531 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-699db6b76b-fd9ls"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.622339 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-699db6b76b-fd9ls" podUID="8c5926be-c223-4cbc-b6e3-a16726aa6c84" containerName="placement-log" containerID="cri-o://bff50251b4d187b021a54254dfa7d771b946c97dfed06af02871e3a9618634ea" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.622765 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-699db6b76b-fd9ls" podUID="8c5926be-c223-4cbc-b6e3-a16726aa6c84" containerName="placement-api" containerID="cri-o://4f5cc840893a37a8358a60214a0e53d57ac801d6d1d7c94a9535b9435aca7b18" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.664475 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdhl2\" (UniqueName: \"kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2\") pod \"novacell13b0c-account-delete-2wjl9\" (UID: \"5e48c1a5-da0e-4539-a55c-7c0bbdae2486\") " pod="openstack/novacell13b0c-account-delete-2wjl9" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.664662 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frmmv\" (UniqueName: \"kubernetes.io/projected/1582f178-44ee-4e28-a6d6-1d6a29050b56-kube-api-access-frmmv\") pod \"novacell0be58-account-delete-r2zk9\" (UID: \"1582f178-44ee-4e28-a6d6-1d6a29050b56\") " pod="openstack/novacell0be58-account-delete-r2zk9" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 
13:40:21.670555 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6dp7h"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.701486 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-6dp7h"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.770104 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdhl2\" (UniqueName: \"kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2\") pod \"novacell13b0c-account-delete-2wjl9\" (UID: \"5e48c1a5-da0e-4539-a55c-7c0bbdae2486\") " pod="openstack/novacell13b0c-account-delete-2wjl9" Oct 08 13:40:21 crc kubenswrapper[5065]: E1008 13:40:21.775623 5065 projected.go:194] Error preparing data for projected volume kube-api-access-kdhl2 for pod openstack/novacell13b0c-account-delete-2wjl9: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Oct 08 13:40:21 crc kubenswrapper[5065]: E1008 13:40:21.775701 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2 podName:5e48c1a5-da0e-4539-a55c-7c0bbdae2486 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:22.275675862 +0000 UTC m=+1324.053057619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kdhl2" (UniqueName: "kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2") pod "novacell13b0c-account-delete-2wjl9" (UID: "5e48c1a5-da0e-4539-a55c-7c0bbdae2486") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.776140 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frmmv\" (UniqueName: \"kubernetes.io/projected/1582f178-44ee-4e28-a6d6-1d6a29050b56-kube-api-access-frmmv\") pod \"novacell0be58-account-delete-r2zk9\" (UID: \"1582f178-44ee-4e28-a6d6-1d6a29050b56\") " pod="openstack/novacell0be58-account-delete-r2zk9" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.788399 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.789200 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="38fd97a6-e936-4503-a238-97b63e01a7de" containerName="openstack-network-exporter" containerID="cri-o://fbe9f8ee86cd56018708f58dee327d04bc11e4fe4d541910462237c91b46cc3e" gracePeriod=300 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.813461 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi6610-account-delete-vhpbl" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.820264 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-76vsq"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.820765 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" podUID="18710aa1-a99f-421b-9a4f-694362061773" containerName="dnsmasq-dns" containerID="cri-o://f184d017af97fa8ec5dc9b650dbdc3311a404638bdaa61798fc73a62a8d532a3" gracePeriod=10 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.836653 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-jhs7h"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.850804 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-jhs7h"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.866741 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" podUID="18710aa1-a99f-421b-9a4f-694362061773" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: connect: connection refused" Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.885644 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="b215a42c-d422-4db9-a83e-df79f7bff9e6" containerName="ovsdbserver-nb" containerID="cri-o://1d09895c6bf96fc0edb25b62fd359eefd1417b0069aceeca025d88f6c6f9d233" gracePeriod=300 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.896144 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.896726 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-server" containerID="cri-o://447f9192660f0cd3d7854a61bd51664ab2997e184354155cd12971c4a4b3c37f" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.897197 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="swift-recon-cron" containerID="cri-o://ee8180b4debc8a8a65ab9f256776c79126427b029cc1b89fe8e4f39cd79c3743" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.897262 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="rsync" containerID="cri-o://361bbf3967cbd94d97d58e01749266489ce91e87fdaebf76c4503f54283e2a95" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.897310 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-expirer" containerID="cri-o://4bfd68b6c2e297a3e83b008f64bf4eb401d26a1425b3db49bc21146e3d7c7872" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.897363 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-updater" containerID="cri-o://6c09e71db7522c3e069f3e338197a1e134c04222ec8bac9d76d05c50c19f239d" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.897623 5065 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-auditor" containerID="cri-o://9c7224674d840915450cc8c6e25639a7e80b95b6957004f6de0f29bf0fb91d8b" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.897720 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-replicator" containerID="cri-o://7c64b1351e83cde0e80c1d7fe3ad4b7e16de2a070b15742831cbf0394985803c" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.897918 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-reaper" containerID="cri-o://00ff24aff045fd814a16790f59f062842fc0ff9e0601f5b391a537de4dcebe49" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.897991 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-updater" containerID="cri-o://bb695ac650185f4333ac64a02ac24fd255826948ebc5681b92747b3e86469514" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.897982 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-server" containerID="cri-o://64b194891a57866256321c60963b1a55abe5ea9137d96d8eea3238c821373834" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.898047 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-auditor" containerID="cri-o://f0e5c35dcc9f669808490b8ddada56ac33644db4f172f75ea9b908094bd89de9" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.898128 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-auditor" containerID="cri-o://20d9ec411562f51e0685da916aad11e10d63910e27f84036d46574483c293732" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.898139 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-server" containerID="cri-o://4af6a29227998441f27fb03057c52c8260bc95198e0a44323b807dcb4528d586" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.898133 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-replicator" containerID="cri-o://c7e5718722f1f0cc720ba03200b5072c4500b734f459734ed12c0f62b2f3f7ea" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.898207 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-replicator" containerID="cri-o://6042f285dd28c2e556762f729925d333ecdd7101fd2afd3449915404051b7432" gracePeriod=30 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.934728 5065 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell1-cell-mapping-qn4n5"] Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.938699 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="38fd97a6-e936-4503-a238-97b63e01a7de" containerName="ovsdbserver-sb" containerID="cri-o://2915271db74b88ab9e16677d8efcfed4d3eb12143a6af327cb8a7a3688f4f413" gracePeriod=300 Oct 08 13:40:21 crc kubenswrapper[5065]: I1008 13:40:21.978440 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qn4n5"] Oct 08 13:40:21 crc kubenswrapper[5065]: E1008 13:40:21.998734 5065 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 08 13:40:21 crc kubenswrapper[5065]: E1008 13:40:21.998792 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data podName:a416f725-cd7c-4bd8-9123-28cad18157d9 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:23.998776363 +0000 UTC m=+1325.776158120 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data") pod "rabbitmq-cell1-server-0" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9") : configmap "rabbitmq-cell1-config-data" not found Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.022483 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mr9ph"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.050157 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mr9ph"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.065958 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f88c4599-sd7mw"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.066253 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f88c4599-sd7mw" podUID="fd3f72f8-a569-409f-a590-02a0f7fcdc81" containerName="neutron-api" containerID="cri-o://12a6dc5f131bbdcb6ea1fc707c13a16e4fc8d0b823a7816221475889271b7ff8" gracePeriod=30 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.066795 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f88c4599-sd7mw" podUID="fd3f72f8-a569-409f-a590-02a0f7fcdc81" containerName="neutron-httpd" containerID="cri-o://82d033cc034d74a52247d1b9862682e1e567a8d524767ccde6e07f6e29b4e821" gracePeriod=30 Oct 08 13:40:22 crc kubenswrapper[5065]: E1008 13:40:22.139127 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd7a89294fe370f8f8e4fa9239f9e8afee5cb7f783b16a353fb49a9e06106fbe" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.144480 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.144773 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fa6e8e72-d895-4018-a176-978d7975d8a6" containerName="glance-log" containerID="cri-o://7874ec49d85eba9cfb91c420e3419d8a588bba8034f460f6177db2652682d633" gracePeriod=30 Oct 08 
13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.152903 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fa6e8e72-d895-4018-a176-978d7975d8a6" containerName="glance-httpd" containerID="cri-o://7075a38356042d5e366d951a3d1e78c1621a5061aada6f743076696bfacd2c63" gracePeriod=30 Oct 08 13:40:22 crc kubenswrapper[5065]: E1008 13:40:22.163290 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd7a89294fe370f8f8e4fa9239f9e8afee5cb7f783b16a353fb49a9e06106fbe" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.173157 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.173609 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8cea80f5-d915-459c-9882-4ce114929ab4" containerName="glance-log" containerID="cri-o://f0213aea8bcbd4774261ea9ed66c3a96af420735c302c67b2721cd71e709425b" gracePeriod=30 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.173850 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8cea80f5-d915-459c-9882-4ce114929ab4" containerName="glance-httpd" containerID="cri-o://ed14fb4d41bcd1662b46fbaba5448c7edb1bddf5c128eb75189e64b2a3222c21" gracePeriod=30 Oct 08 13:40:22 crc kubenswrapper[5065]: E1008 13:40:22.194063 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd7a89294fe370f8f8e4fa9239f9e8afee5cb7f783b16a353fb49a9e06106fbe" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 08 13:40:22 crc kubenswrapper[5065]: E1008 13:40:22.194389 5065 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="2c2f3965-f057-4b1d-bbc9-7235ac48ed49" containerName="ovn-northd" Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.261344 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.279980 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-85b95d746c-knffl"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.280245 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-85b95d746c-knffl" podUID="caf670f8-9cf6-4200-8036-05e9798cad78" containerName="proxy-httpd" containerID="cri-o://3c914d8505b39861ccdfff5dcc013fed1cc1c91fa6e400a924167a096e07326c" gracePeriod=30 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.280734 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-85b95d746c-knffl" podUID="caf670f8-9cf6-4200-8036-05e9798cad78" containerName="proxy-server" containerID="cri-o://b8ae852d80aae6acad85780abb16c8854f1fddd5178e43292b6c487839c29fc5" gracePeriod=30 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.315329 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-db-create-6nrgv"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.316683 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovs-vswitchd" containerID="cri-o://5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" gracePeriod=29 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.322231 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdhl2\" (UniqueName: \"kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2\") pod \"novacell13b0c-account-delete-2wjl9\" (UID: \"5e48c1a5-da0e-4539-a55c-7c0bbdae2486\") " pod="openstack/novacell13b0c-account-delete-2wjl9" Oct 08 13:40:22 crc kubenswrapper[5065]: E1008 13:40:22.331892 5065 projected.go:194] Error preparing data for projected volume kube-api-access-kdhl2 for pod openstack/novacell13b0c-account-delete-2wjl9: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Oct 08 13:40:22 crc kubenswrapper[5065]: E1008 13:40:22.331997 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2 podName:5e48c1a5-da0e-4539-a55c-7c0bbdae2486 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:23.331945593 +0000 UTC m=+1325.109327350 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kdhl2" (UniqueName: "kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2") pod "novacell13b0c-account-delete-2wjl9" (UID: "5e48c1a5-da0e-4539-a55c-7c0bbdae2486") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.362666 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-6nrgv"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.384679 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5412-account-create-2vrxz"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.411680 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a416f725-cd7c-4bd8-9123-28cad18157d9" containerName="rabbitmq" containerID="cri-o://7ac59a251e4e8b65634fce3722c85c666a539268df3ef42910aef18edd46491a" gracePeriod=604800 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.422095 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ca6e-account-create-r89cv"] Oct 08 13:40:22 crc kubenswrapper[5065]: E1008 13:40:22.427104 5065 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 08 13:40:22 crc kubenswrapper[5065]: E1008 13:40:22.427240 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data podName:ae3d89be-0a42-4a3d-914c-3bff67bd37b4 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:24.427219422 +0000 UTC m=+1326.204601179 (durationBeforeRetry 2s). 
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.436292 5065 generic.go:334] "Generic (PLEG): container finished" podID="fa6e8e72-d895-4018-a176-978d7975d8a6" containerID="7874ec49d85eba9cfb91c420e3419d8a588bba8034f460f6177db2652682d633" exitCode=143 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.436388 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa6e8e72-d895-4018-a176-978d7975d8a6","Type":"ContainerDied","Data":"7874ec49d85eba9cfb91c420e3419d8a588bba8034f460f6177db2652682d633"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.443442 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xnw9m" event={"ID":"4749b7e4-3896-474d-84b3-8ddf351a24ac","Type":"ContainerDied","Data":"e7faa00fac253da8b23fd9286508450d6474c4f49878ad51e850818dedbbc485"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.443493 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7faa00fac253da8b23fd9286508450d6474c4f49878ad51e850818dedbbc485" Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.456156 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5412-account-create-2vrxz"] Oct 08 13:40:22 crc kubenswrapper[5065]: E1008 13:40:22.458513 5065 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Oct 08 13:40:22 crc kubenswrapper[5065]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Oct 08 13:40:22 crc kubenswrapper[5065]: + source /usr/local/bin/container-scripts/functions Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNBridge=br-int Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNRemote=tcp:localhost:6642 Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNEncapType=geneve Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNAvailabilityZones= Oct 08 13:40:22 crc kubenswrapper[5065]: ++ EnableChassisAsGateway=true Oct 08 13:40:22 crc kubenswrapper[5065]: ++ PhysicalNetworks= Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNHostName= Oct 08 13:40:22 crc kubenswrapper[5065]: ++ DB_FILE=/etc/openvswitch/conf.db Oct 08 13:40:22 crc kubenswrapper[5065]: ++ ovs_dir=/var/lib/openvswitch Oct 08 13:40:22 crc kubenswrapper[5065]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Oct 08 13:40:22 crc kubenswrapper[5065]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Oct 08 13:40:22 crc kubenswrapper[5065]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 08 13:40:22 crc kubenswrapper[5065]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 13:40:22 crc kubenswrapper[5065]: + sleep 0.5 Oct 08 13:40:22 crc kubenswrapper[5065]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 13:40:22 crc kubenswrapper[5065]: + sleep 0.5 Oct 08 13:40:22 crc kubenswrapper[5065]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 13:40:22 crc kubenswrapper[5065]: + sleep 0.5 Oct 08 13:40:22 crc kubenswrapper[5065]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 13:40:22 crc kubenswrapper[5065]: + cleanup_ovsdb_server_semaphore Oct 08 13:40:22 crc kubenswrapper[5065]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 08 13:40:22 crc kubenswrapper[5065]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Oct 08 13:40:22 crc kubenswrapper[5065]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-f9wxn" message=< Oct 08 13:40:22 crc kubenswrapper[5065]: Exiting ovsdb-server (5) [ OK ] Oct 08 13:40:22 crc kubenswrapper[5065]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Oct 08 13:40:22 crc kubenswrapper[5065]: + source /usr/local/bin/container-scripts/functions Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNBridge=br-int Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNRemote=tcp:localhost:6642 Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNEncapType=geneve Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNAvailabilityZones= Oct 08 13:40:22 crc kubenswrapper[5065]: ++ EnableChassisAsGateway=true Oct 08 13:40:22 crc kubenswrapper[5065]: ++ PhysicalNetworks= Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNHostName= Oct 08 13:40:22 crc kubenswrapper[5065]: ++ DB_FILE=/etc/openvswitch/conf.db Oct 08 13:40:22 crc kubenswrapper[5065]: ++ ovs_dir=/var/lib/openvswitch Oct 08 13:40:22 crc kubenswrapper[5065]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Oct 08 13:40:22 crc kubenswrapper[5065]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Oct 08 13:40:22 crc kubenswrapper[5065]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 08 13:40:22 crc kubenswrapper[5065]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 13:40:22 crc kubenswrapper[5065]: + sleep 0.5 Oct 08 13:40:22 crc kubenswrapper[5065]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 13:40:22 crc kubenswrapper[5065]: + sleep 0.5 Oct 08 13:40:22 crc kubenswrapper[5065]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 13:40:22 crc kubenswrapper[5065]: + sleep 0.5 Oct 08 13:40:22 crc kubenswrapper[5065]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 13:40:22 crc kubenswrapper[5065]: + cleanup_ovsdb_server_semaphore Oct 08 13:40:22 crc kubenswrapper[5065]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 08 13:40:22 crc kubenswrapper[5065]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Oct 08 13:40:22 crc kubenswrapper[5065]: > Oct 08 13:40:22 crc kubenswrapper[5065]: E1008 13:40:22.458667 5065 kuberuntime_container.go:691] "PreStop hook failed" err=< Oct 08 13:40:22 crc kubenswrapper[5065]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Oct 08 13:40:22 crc kubenswrapper[5065]: + source /usr/local/bin/container-scripts/functions Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNBridge=br-int Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNRemote=tcp:localhost:6642 Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNEncapType=geneve Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNAvailabilityZones= Oct 08 13:40:22 crc kubenswrapper[5065]: ++ EnableChassisAsGateway=true Oct 08 13:40:22 crc kubenswrapper[5065]: ++ PhysicalNetworks= Oct 08 13:40:22 crc kubenswrapper[5065]: ++ OVNHostName= Oct 08 13:40:22 crc kubenswrapper[5065]: ++ DB_FILE=/etc/openvswitch/conf.db Oct 08 13:40:22 crc kubenswrapper[5065]: ++ ovs_dir=/var/lib/openvswitch Oct 08 13:40:22 crc kubenswrapper[5065]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Oct 08 13:40:22 crc kubenswrapper[5065]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Oct 08 13:40:22 crc kubenswrapper[5065]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 08 13:40:22 crc kubenswrapper[5065]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 13:40:22 crc kubenswrapper[5065]: + sleep 0.5 Oct 08 13:40:22 crc kubenswrapper[5065]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 13:40:22 crc kubenswrapper[5065]: + sleep 0.5 Oct 08 13:40:22 crc kubenswrapper[5065]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 13:40:22 crc kubenswrapper[5065]: + sleep 0.5 Oct 08 13:40:22 crc kubenswrapper[5065]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 13:40:22 crc kubenswrapper[5065]: + cleanup_ovsdb_server_semaphore Oct 08 13:40:22 crc kubenswrapper[5065]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 08 13:40:22 crc kubenswrapper[5065]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Oct 08 13:40:22 crc kubenswrapper[5065]: > pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovsdb-server" containerID="cri-o://a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.459928 5065 generic.go:334] "Generic (PLEG): container finished" podID="fd3f72f8-a569-409f-a590-02a0f7fcdc81" containerID="82d033cc034d74a52247d1b9862682e1e567a8d524767ccde6e07f6e29b4e821" exitCode=0 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.459989 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f88c4599-sd7mw" event={"ID":"fd3f72f8-a569-409f-a590-02a0f7fcdc81","Type":"ContainerDied","Data":"82d033cc034d74a52247d1b9862682e1e567a8d524767ccde6e07f6e29b4e821"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.463613 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovsdb-server" containerID="cri-o://a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" gracePeriod=28 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.486228 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b215a42c-d422-4db9-a83e-df79f7bff9e6/ovsdbserver-nb/0.log" Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.488039 5065 generic.go:334] "Generic (PLEG): container finished" podID="b215a42c-d422-4db9-a83e-df79f7bff9e6" containerID="87f2173752e5478c6a5d9a8376b7108d4661ca6ba8b7eb3ed95cd8acf5ce88a2" exitCode=2 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.488069 5065 generic.go:334] "Generic (PLEG): container finished" podID="b215a42c-d422-4db9-a83e-df79f7bff9e6" containerID="1d09895c6bf96fc0edb25b62fd359eefd1417b0069aceeca025d88f6c6f9d233" exitCode=143 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.488177 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b215a42c-d422-4db9-a83e-df79f7bff9e6","Type":"ContainerDied","Data":"87f2173752e5478c6a5d9a8376b7108d4661ca6ba8b7eb3ed95cd8acf5ce88a2"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.488215 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b215a42c-d422-4db9-a83e-df79f7bff9e6","Type":"ContainerDied","Data":"1d09895c6bf96fc0edb25b62fd359eefd1417b0069aceeca025d88f6c6f9d233"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.515970 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_38fd97a6-e936-4503-a238-97b63e01a7de/ovsdbserver-sb/0.log" Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.516258 5065 generic.go:334] "Generic (PLEG): container finished" podID="38fd97a6-e936-4503-a238-97b63e01a7de" containerID="fbe9f8ee86cd56018708f58dee327d04bc11e4fe4d541910462237c91b46cc3e" exitCode=2 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.516339 5065 generic.go:334] "Generic (PLEG): container finished" podID="38fd97a6-e936-4503-a238-97b63e01a7de" containerID="2915271db74b88ab9e16677d8efcfed4d3eb12143a6af327cb8a7a3688f4f413" exitCode=143 
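
A pattern worth decoding across these entries: container runtimes report death-by-signal as 128+signo, so exitCode=137 means SIGKILL (9), seen here where PreStop hooks such as ovn-ctl stop_controller and stop-ovsdb-server.sh were still looping when their window expired, exitCode=143 means SIGTERM (15), a process that shut down on the signal, and small codes like 0 or 2 are ordinary exit statuses. The shrinking gracePeriod values for the same pod (30, then 29 for ovs-vswitchd, then 28 for ovsdb-server) are consistent with kubelet issuing each kill with whatever remains of the pod's grace budget as the hooks consume wall-clock time, while rabbitmq's gracePeriod=604800 (7 days) reflects a deliberately long terminationGracePeriodSeconds on that pod. A small self-contained decoder for the exit-code convention:

    package main

    import "fmt"

    // decodeExit interprets container exit codes using the common
    // 128+signal convention seen throughout this log.
    func decodeExit(code int) string {
    	names := map[int]string{9: "SIGKILL", 15: "SIGTERM"}
    	if code > 128 {
    		sig := code - 128
    		if n, ok := names[sig]; ok {
    			return fmt.Sprintf("killed by signal %d (%s)", sig, n)
    		}
    		return fmt.Sprintf("killed by signal %d", sig)
    	}
    	return fmt.Sprintf("exited with status %d", code)
    }

    func main() {
    	for _, c := range []int{0, 2, 137, 143} {
    		fmt.Printf("exitCode=%d -> %s\n", c, decodeExit(c))
    	}
    }
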
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.516599 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"38fd97a6-e936-4503-a238-97b63e01a7de","Type":"ContainerDied","Data":"fbe9f8ee86cd56018708f58dee327d04bc11e4fe4d541910462237c91b46cc3e"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.516706 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"38fd97a6-e936-4503-a238-97b63e01a7de","Type":"ContainerDied","Data":"2915271db74b88ab9e16677d8efcfed4d3eb12143a6af327cb8a7a3688f4f413"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.545107 5065 generic.go:334] "Generic (PLEG): container finished" podID="8cea80f5-d915-459c-9882-4ce114929ab4" containerID="f0213aea8bcbd4774261ea9ed66c3a96af420735c302c67b2721cd71e709425b" exitCode=143 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.545432 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8cea80f5-d915-459c-9882-4ce114929ab4","Type":"ContainerDied","Data":"f0213aea8bcbd4774261ea9ed66c3a96af420735c302c67b2721cd71e709425b"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.545538 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-tw6n9"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.590764 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="4bfd68b6c2e297a3e83b008f64bf4eb401d26a1425b3db49bc21146e3d7c7872" exitCode=0 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591057 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="6c09e71db7522c3e069f3e338197a1e134c04222ec8bac9d76d05c50c19f239d" exitCode=0 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591065 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="9c7224674d840915450cc8c6e25639a7e80b95b6957004f6de0f29bf0fb91d8b" exitCode=0 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591072 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="7c64b1351e83cde0e80c1d7fe3ad4b7e16de2a070b15742831cbf0394985803c" exitCode=0 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591080 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="bb695ac650185f4333ac64a02ac24fd255826948ebc5681b92747b3e86469514" exitCode=0 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591087 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="20d9ec411562f51e0685da916aad11e10d63910e27f84036d46574483c293732" exitCode=0 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591094 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="6042f285dd28c2e556762f729925d333ecdd7101fd2afd3449915404051b7432" exitCode=0 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591207 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="00ff24aff045fd814a16790f59f062842fc0ff9e0601f5b391a537de4dcebe49" exitCode=0 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591217 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" 
containerID="f0e5c35dcc9f669808490b8ddada56ac33644db4f172f75ea9b908094bd89de9" exitCode=0 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591223 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="c7e5718722f1f0cc720ba03200b5072c4500b734f459734ed12c0f62b2f3f7ea" exitCode=0 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591230 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="447f9192660f0cd3d7854a61bd51664ab2997e184354155cd12971c4a4b3c37f" exitCode=0 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591314 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-ca6e-account-create-r89cv"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591343 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"4bfd68b6c2e297a3e83b008f64bf4eb401d26a1425b3db49bc21146e3d7c7872"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591382 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"6c09e71db7522c3e069f3e338197a1e134c04222ec8bac9d76d05c50c19f239d"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591395 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"9c7224674d840915450cc8c6e25639a7e80b95b6957004f6de0f29bf0fb91d8b"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591403 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"7c64b1351e83cde0e80c1d7fe3ad4b7e16de2a070b15742831cbf0394985803c"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591439 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"bb695ac650185f4333ac64a02ac24fd255826948ebc5681b92747b3e86469514"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591450 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"20d9ec411562f51e0685da916aad11e10d63910e27f84036d46574483c293732"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591459 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"6042f285dd28c2e556762f729925d333ecdd7101fd2afd3449915404051b7432"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591467 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"00ff24aff045fd814a16790f59f062842fc0ff9e0601f5b391a537de4dcebe49"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591476 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"f0e5c35dcc9f669808490b8ddada56ac33644db4f172f75ea9b908094bd89de9"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591484 5065 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"c7e5718722f1f0cc720ba03200b5072c4500b734f459734ed12c0f62b2f3f7ea"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.591492 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"447f9192660f0cd3d7854a61bd51664ab2997e184354155cd12971c4a4b3c37f"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.599452 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutronca6e-account-delete-g4lj7"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.600300 5065 generic.go:334] "Generic (PLEG): container finished" podID="8c5926be-c223-4cbc-b6e3-a16726aa6c84" containerID="bff50251b4d187b021a54254dfa7d771b946c97dfed06af02871e3a9618634ea" exitCode=143 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.600396 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699db6b76b-fd9ls" event={"ID":"8c5926be-c223-4cbc-b6e3-a16726aa6c84","Type":"ContainerDied","Data":"bff50251b4d187b021a54254dfa7d771b946c97dfed06af02871e3a9618634ea"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.613873 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-l67ps_4eba221c-653d-434a-a486-16be41c4a5c4/openstack-network-exporter/0.log" Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.614313 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8439-account-create-fc44x"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.614397 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-l67ps" event={"ID":"4eba221c-653d-434a-a486-16be41c4a5c4","Type":"ContainerDied","Data":"f267824b75a67ed203718eacd97fef3f37c4807c69adbf8065b78ad02e426938"} Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.614501 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f267824b75a67ed203718eacd97fef3f37c4807c69adbf8065b78ad02e426938" Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.623715 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican5412-account-delete-fh88w"] Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.624482 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell0be58-account-delete-r2zk9" Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.634811 5065 generic.go:334] "Generic (PLEG): container finished" podID="2136579b-6d89-48ff-b960-a0401eb9af4c" containerID="9464772db83ebfdeae74f66036b1b7d4c94daed370654f9e39056ee48ce23e40" exitCode=137 Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.643857 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-l67ps_4eba221c-653d-434a-a486-16be41c4a5c4/openstack-network-exporter/0.log" Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.643923 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-l67ps"
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.655489 5065 generic.go:334] "Generic (PLEG): container finished" podID="0d473a1f-35dc-4b20-a344-19c23f1c8c06" containerID="2f9df638248c0e2761238e4dd3780cc9ae247b9e55d3548a3bb207837154cf62" exitCode=0
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.655574 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d473a1f-35dc-4b20-a344-19c23f1c8c06","Type":"ContainerDied","Data":"2f9df638248c0e2761238e4dd3780cc9ae247b9e55d3548a3bb207837154cf62"}
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.655825 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xnw9m"
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.656580 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-tw6n9"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.658000 5065 generic.go:334] "Generic (PLEG): container finished" podID="18710aa1-a99f-421b-9a4f-694362061773" containerID="f184d017af97fa8ec5dc9b650dbdc3311a404638bdaa61798fc73a62a8d532a3" exitCode=0
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.658035 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" event={"ID":"18710aa1-a99f-421b-9a4f-694362061773","Type":"ContainerDied","Data":"f184d017af97fa8ec5dc9b650dbdc3311a404638bdaa61798fc73a62a8d532a3"}
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.674553 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8439-account-create-fc44x"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.684327 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-frtzj"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.704809 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq"
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.721240 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-frtzj"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.739645 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance8439-account-delete-zlfcc"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.751505 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e7e9-account-create-8545w"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.753001 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brjk8\" (UniqueName: \"kubernetes.io/projected/4eba221c-653d-434a-a486-16be41c4a5c4-kube-api-access-brjk8\") pod \"4eba221c-653d-434a-a486-16be41c4a5c4\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.753064 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eba221c-653d-434a-a486-16be41c4a5c4-config\") pod \"4eba221c-653d-434a-a486-16be41c4a5c4\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.753185 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eba221c-653d-434a-a486-16be41c4a5c4-combined-ca-bundle\") pod \"4eba221c-653d-434a-a486-16be41c4a5c4\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.753293 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4eba221c-653d-434a-a486-16be41c4a5c4-ovs-rundir\") pod \"4eba221c-653d-434a-a486-16be41c4a5c4\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.753668 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4eba221c-653d-434a-a486-16be41c4a5c4-ovn-rundir\") pod \"4eba221c-653d-434a-a486-16be41c4a5c4\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.753744 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eba221c-653d-434a-a486-16be41c4a5c4-metrics-certs-tls-certs\") pod \"4eba221c-653d-434a-a486-16be41c4a5c4\" (UID: \"4eba221c-653d-434a-a486-16be41c4a5c4\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.753895 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eba221c-653d-434a-a486-16be41c4a5c4-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "4eba221c-653d-434a-a486-16be41c4a5c4" (UID: "4eba221c-653d-434a-a486-16be41c4a5c4"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.753951 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eba221c-653d-434a-a486-16be41c4a5c4-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "4eba221c-653d-434a-a486-16be41c4a5c4" (UID: "4eba221c-653d-434a-a486-16be41c4a5c4"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.754294 5065 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4eba221c-653d-434a-a486-16be41c4a5c4-ovs-rundir\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.754314 5065 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4eba221c-653d-434a-a486-16be41c4a5c4-ovn-rundir\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.754466 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eba221c-653d-434a-a486-16be41c4a5c4-config" (OuterVolumeSpecName: "config") pod "4eba221c-653d-434a-a486-16be41c4a5c4" (UID: "4eba221c-653d-434a-a486-16be41c4a5c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.754803 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e7e9-account-create-8545w"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.768499 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xv8c6"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.793878 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xv8c6"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.794671 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eba221c-653d-434a-a486-16be41c4a5c4-kube-api-access-brjk8" (OuterVolumeSpecName: "kube-api-access-brjk8") pod "4eba221c-653d-434a-a486-16be41c4a5c4" (UID: "4eba221c-653d-434a-a486-16be41c4a5c4"). InnerVolumeSpecName "kube-api-access-brjk8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.808390 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.808747 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bbca12dd-73a9-4533-b424-ebaf0c8cec0c" containerName="nova-api-log" containerID="cri-o://e59b55badd92b9747669cea1a276f0f4749ca1574e5171035193093770c5a6ca" gracePeriod=30
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.809206 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bbca12dd-73a9-4533-b424-ebaf0c8cec0c" containerName="nova-api-api" containerID="cri-o://14b3fa78ec0416aa6cf474683bcb75b805d9d36b7d9d4631a80e461d654dfd5a" gracePeriod=30
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.820619 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.826812 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eba221c-653d-434a-a486-16be41c4a5c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4eba221c-653d-434a-a486-16be41c4a5c4" (UID: "4eba221c-653d-434a-a486-16be41c4a5c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.832109 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.840493 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7b79866b6-r6s8q"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.840799 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" podUID="f580765e-50e7-42a1-a798-325b80e29e9d" containerName="barbican-keystone-listener" containerID="cri-o://33ceec7bd32526aadb9fdccd627ea9d6843d1eeca2d61fca38b266e771abc801" gracePeriod=30
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.841362 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" podUID="f580765e-50e7-42a1-a798-325b80e29e9d" containerName="barbican-keystone-listener-log" containerID="cri-o://e1798572ea57cfc0ec49a70e753277538e330c3d721fda3351da9d563af5e085" gracePeriod=30
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.849662 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5d9966bcdf-t8xzk"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.849949 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5d9966bcdf-t8xzk" podUID="9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" containerName="barbican-worker-log" containerID="cri-o://59a606638a8f15bd4f91168bdf8deccda702a925eb4c86f515c45335f2c32e92" gracePeriod=30
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.850114 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5d9966bcdf-t8xzk" podUID="9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" containerName="barbican-worker" containerID="cri-o://53a13a8cc7470ca3fcf00e1a1762fd3a353cd97e981efeec2278512dabac8016" gracePeriod=30
Oct 08 13:40:22 crc kubenswrapper[5065]: W1008 13:40:22.854667 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38fe9b6a_9cdf_4585_a585_474172306dd9.slice/crio-b50196b1626344fe3206ed82794cb37cad012729508bf197c87d73ae7bb024f9 WatchSource:0}: Error finding container b50196b1626344fe3206ed82794cb37cad012729508bf197c87d73ae7bb024f9: Status 404 returned error can't find the container with id b50196b1626344fe3206ed82794cb37cad012729508bf197c87d73ae7bb024f9
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.857588 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4749b7e4-3896-474d-84b3-8ddf351a24ac-ovn-controller-tls-certs\") pod \"4749b7e4-3896-474d-84b3-8ddf351a24ac\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.857661 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4749b7e4-3896-474d-84b3-8ddf351a24ac-combined-ca-bundle\") pod \"4749b7e4-3896-474d-84b3-8ddf351a24ac\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.857720 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvsnv\" (UniqueName: \"kubernetes.io/projected/18710aa1-a99f-421b-9a4f-694362061773-kube-api-access-pvsnv\") pod \"18710aa1-a99f-421b-9a4f-694362061773\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.857893 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-run\") pod \"4749b7e4-3896-474d-84b3-8ddf351a24ac\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.857964 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4749b7e4-3896-474d-84b3-8ddf351a24ac-scripts\") pod \"4749b7e4-3896-474d-84b3-8ddf351a24ac\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.858009 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-ovsdbserver-sb\") pod \"18710aa1-a99f-421b-9a4f-694362061773\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.858029 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d2m4\" (UniqueName: \"kubernetes.io/projected/4749b7e4-3896-474d-84b3-8ddf351a24ac-kube-api-access-4d2m4\") pod \"4749b7e4-3896-474d-84b3-8ddf351a24ac\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.858055 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-config\") pod \"18710aa1-a99f-421b-9a4f-694362061773\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.858132 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-run-ovn\") pod \"4749b7e4-3896-474d-84b3-8ddf351a24ac\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.858154 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-ovsdbserver-nb\") pod \"18710aa1-a99f-421b-9a4f-694362061773\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.858211 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-log-ovn\") pod \"4749b7e4-3896-474d-84b3-8ddf351a24ac\" (UID: \"4749b7e4-3896-474d-84b3-8ddf351a24ac\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.858238 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-dns-swift-storage-0\") pod \"18710aa1-a99f-421b-9a4f-694362061773\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.858290 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-dns-svc\") pod \"18710aa1-a99f-421b-9a4f-694362061773\" (UID: \"18710aa1-a99f-421b-9a4f-694362061773\") "
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.858469 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-run" (OuterVolumeSpecName: "var-run") pod "4749b7e4-3896-474d-84b3-8ddf351a24ac" (UID: "4749b7e4-3896-474d-84b3-8ddf351a24ac"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.858599 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4749b7e4-3896-474d-84b3-8ddf351a24ac" (UID: "4749b7e4-3896-474d-84b3-8ddf351a24ac"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.858870 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4749b7e4-3896-474d-84b3-8ddf351a24ac" (UID: "4749b7e4-3896-474d-84b3-8ddf351a24ac"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.859804 5065 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-run-ovn\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.859832 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brjk8\" (UniqueName: \"kubernetes.io/projected/4eba221c-653d-434a-a486-16be41c4a5c4-kube-api-access-brjk8\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.859855 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eba221c-653d-434a-a486-16be41c4a5c4-config\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.859867 5065 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-log-ovn\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.859879 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eba221c-653d-434a-a486-16be41c4a5c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.859890 5065 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4749b7e4-3896-474d-84b3-8ddf351a24ac-var-run\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.868464 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5dd9f968c6-s658p"]
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.868734 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5dd9f968c6-s658p" podUID="2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" containerName="barbican-api-log" containerID="cri-o://282626a68ae112e2a7cb36758619cc926c9f8f75cf3cc744534bbd577bef1de1" gracePeriod=30
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.868875 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5dd9f968c6-s658p" podUID="2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" containerName="barbican-api" containerID="cri-o://0e1463d6dafc9375c3fe50606be6afdbff0ecc4ff69847c5fb6972ed597e6323" gracePeriod=30
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.870871 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4749b7e4-3896-474d-84b3-8ddf351a24ac-scripts" (OuterVolumeSpecName: "scripts") pod "4749b7e4-3896-474d-84b3-8ddf351a24ac" (UID: "4749b7e4-3896-474d-84b3-8ddf351a24ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.884168 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4749b7e4-3896-474d-84b3-8ddf351a24ac-kube-api-access-4d2m4" (OuterVolumeSpecName: "kube-api-access-4d2m4") pod "4749b7e4-3896-474d-84b3-8ddf351a24ac" (UID: "4749b7e4-3896-474d-84b3-8ddf351a24ac"). InnerVolumeSpecName "kube-api-access-4d2m4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.885779 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18710aa1-a99f-421b-9a4f-694362061773-kube-api-access-pvsnv" (OuterVolumeSpecName: "kube-api-access-pvsnv") pod "18710aa1-a99f-421b-9a4f-694362061773" (UID: "18710aa1-a99f-421b-9a4f-694362061773"). InnerVolumeSpecName "kube-api-access-pvsnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.903740 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c21010-ce00-4b6c-8e9d-2e407bb703ed" path="/var/lib/kubelet/pods/47c21010-ce00-4b6c-8e9d-2e407bb703ed/volumes"
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.923908 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70fd4e43-69f4-482f-a374-2b8074e6a1d7" path="/var/lib/kubelet/pods/70fd4e43-69f4-482f-a374-2b8074e6a1d7/volumes"
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.941278 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77637c1f-26f5-4ea7-9a5c-af70030ca78c" path="/var/lib/kubelet/pods/77637c1f-26f5-4ea7-9a5c-af70030ca78c/volumes"
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.942528 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ae3d89be-0a42-4a3d-914c-3bff67bd37b4" containerName="rabbitmq" containerID="cri-o://c184ff5407110302a6125a5a613f8a91d5febe7a6d10d230bf471f3d0f46b2f4" gracePeriod=604800
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.953737 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79be2020-2dbb-4cb0-bba8-ee4e78a3786a" path="/var/lib/kubelet/pods/79be2020-2dbb-4cb0-bba8-ee4e78a3786a/volumes"
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.962869 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df6bcfb-1378-4a89-a8d4-fe66dd35f072" path="/var/lib/kubelet/pods/7df6bcfb-1378-4a89-a8d4-fe66dd35f072/volumes"
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.963640 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="871946b3-7c0e-4329-a752-46b5a8d5792f" path="/var/lib/kubelet/pods/871946b3-7c0e-4329-a752-46b5a8d5792f/volumes"
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.965962 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_38fd97a6-e936-4503-a238-97b63e01a7de/ovsdbserver-sb/0.log"
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.966038 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.976357 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d2m4\" (UniqueName: \"kubernetes.io/projected/4749b7e4-3896-474d-84b3-8ddf351a24ac-kube-api-access-4d2m4\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.976400 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvsnv\" (UniqueName: \"kubernetes.io/projected/18710aa1-a99f-421b-9a4f-694362061773-kube-api-access-pvsnv\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.976433 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4749b7e4-3896-474d-84b3-8ddf351a24ac-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.980477 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.983229 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b139de0-decf-49d9-8937-87abc053ee7d" path="/var/lib/kubelet/pods/8b139de0-decf-49d9-8937-87abc053ee7d/volumes"
Oct 08 13:40:22 crc kubenswrapper[5065]: I1008 13:40:22.984022 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="915a9641-6839-457e-9309-ef7b5b83bcfa" path="/var/lib/kubelet/pods/915a9641-6839-457e-9309-ef7b5b83bcfa/volumes"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.019947 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "18710aa1-a99f-421b-9a4f-694362061773" (UID: "18710aa1-a99f-421b-9a4f-694362061773"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.036491 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98fca4dc-7e0d-4bb2-bf66-7f70f13802ae" path="/var/lib/kubelet/pods/98fca4dc-7e0d-4bb2-bf66-7f70f13802ae/volumes"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.037118 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a86a58b3-276e-4dc0-96ea-b9d75f6e48c4" path="/var/lib/kubelet/pods/a86a58b3-276e-4dc0-96ea-b9d75f6e48c4/volumes"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.037707 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b21ff978-d4cd-4a8f-a2b5-85990d9a3517" path="/var/lib/kubelet/pods/b21ff978-d4cd-4a8f-a2b5-85990d9a3517/volumes"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.050368 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4749b7e4-3896-474d-84b3-8ddf351a24ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4749b7e4-3896-474d-84b3-8ddf351a24ac" (UID: "4749b7e4-3896-474d-84b3-8ddf351a24ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.057720 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a257f6-4b74-429b-9da0-b76051265822" path="/var/lib/kubelet/pods/c5a257f6-4b74-429b-9da0-b76051265822/volumes"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.058132 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18710aa1-a99f-421b-9a4f-694362061773" (UID: "18710aa1-a99f-421b-9a4f-694362061773"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.060093 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0b3a4e4-af9e-4625-af67-2631d39d0a0b" path="/var/lib/kubelet/pods/e0b3a4e4-af9e-4625-af67-2631d39d0a0b/volumes"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.060562 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3fcfe14-3fc3-4143-ba87-88695b643507" path="/var/lib/kubelet/pods/f3fcfe14-3fc3-4143-ba87-88695b643507/volumes"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.077339 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mlzk\" (UniqueName: \"kubernetes.io/projected/38fd97a6-e936-4503-a238-97b63e01a7de-kube-api-access-5mlzk\") pod \"38fd97a6-e936-4503-a238-97b63e01a7de\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.077408 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38fd97a6-e936-4503-a238-97b63e01a7de-scripts\") pod \"38fd97a6-e936-4503-a238-97b63e01a7de\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.077470 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"38fd97a6-e936-4503-a238-97b63e01a7de\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.077546 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2136579b-6d89-48ff-b960-a0401eb9af4c-openstack-config-secret\") pod \"2136579b-6d89-48ff-b960-a0401eb9af4c\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.077574 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-metrics-certs-tls-certs\") pod \"38fd97a6-e936-4503-a238-97b63e01a7de\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.077619 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-ovsdbserver-sb-tls-certs\") pod \"38fd97a6-e936-4503-a238-97b63e01a7de\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.077651 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2136579b-6d89-48ff-b960-a0401eb9af4c-openstack-config\") pod \"2136579b-6d89-48ff-b960-a0401eb9af4c\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.077747 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jjt8\" (UniqueName: \"kubernetes.io/projected/2136579b-6d89-48ff-b960-a0401eb9af4c-kube-api-access-2jjt8\") pod \"2136579b-6d89-48ff-b960-a0401eb9af4c\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.077791 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fd97a6-e936-4503-a238-97b63e01a7de-config\") pod \"38fd97a6-e936-4503-a238-97b63e01a7de\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.077835 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2136579b-6d89-48ff-b960-a0401eb9af4c-combined-ca-bundle\") pod \"2136579b-6d89-48ff-b960-a0401eb9af4c\" (UID: \"2136579b-6d89-48ff-b960-a0401eb9af4c\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.077880 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/38fd97a6-e936-4503-a238-97b63e01a7de-ovsdb-rundir\") pod \"38fd97a6-e936-4503-a238-97b63e01a7de\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.077909 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-combined-ca-bundle\") pod \"38fd97a6-e936-4503-a238-97b63e01a7de\" (UID: \"38fd97a6-e936-4503-a238-97b63e01a7de\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.078547 5065 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.078566 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.078580 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4749b7e4-3896-474d-84b3-8ddf351a24ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.093049 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b215a42c-d422-4db9-a83e-df79f7bff9e6/ovsdbserver-nb/0.log"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.093802 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.106986 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-gx42n"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107032 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-gx42n"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107049 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3b0c-account-create-k6csx"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107062 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3b0c-account-create-k6csx"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107076 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell13b0c-account-delete-2wjl9"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107094 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107107 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107119 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107132 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107146 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7m8vv"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107161 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7m8vv"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107173 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ns8wd"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107184 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107198 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ns8wd"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.107224 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placementfb78-account-delete-xfxsd"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.108680 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "18710aa1-a99f-421b-9a4f-694362061773" (UID: "18710aa1-a99f-421b-9a4f-694362061773"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.110209 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38fd97a6-e936-4503-a238-97b63e01a7de-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "38fd97a6-e936-4503-a238-97b63e01a7de" (UID: "38fd97a6-e936-4503-a238-97b63e01a7de"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.110372 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="84d28af9-b1bc-4475-abc6-9c33380349e9" containerName="nova-cell0-conductor-conductor" containerID="cri-o://5540a1f6a5acf06d57aab76547727aae20f3096f0b615cf88a2220bb824dec41" gracePeriod=30
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.110711 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="493c63a1-0210-4a70-a964-79522491fd05" containerName="nova-metadata-log" containerID="cri-o://f00de157ca335b201eedfe40fcdec9a8b8d879af63720162be6c83a57024286e" gracePeriod=30
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.110800 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6ce5c750-265a-4589-8f5e-a9e6a846d0d0" containerName="nova-scheduler-scheduler" containerID="cri-o://91afe7ac30e8893277300be9b194d6af0055b047630cb6d73444b3aff6643b7b" gracePeriod=30
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.110853 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="493c63a1-0210-4a70-a964-79522491fd05" containerName="nova-metadata-metadata" containerID="cri-o://4c694dd4a31cd2734ca3f755d48b6352d8867ca981e402024f204138ea07c425" gracePeriod=30
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.110920 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6470de54-fdec-4648-b941-1031c67f55ca" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://51d59f6c1a32d7f9548db66179bc1cdd29b957b043fcdf8c829de549616ab078" gracePeriod=30
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.111031 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="14ab13f6-4348-4848-9149-4d1ee240d1ed" containerName="nova-cell1-conductor-conductor" containerID="cri-o://3536fcb126639de1eca1e9e8fb96986742592f8949764d6a7616a3f3a55a523f" gracePeriod=30
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.111240 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fd97a6-e936-4503-a238-97b63e01a7de-scripts" (OuterVolumeSpecName: "scripts") pod "38fd97a6-e936-4503-a238-97b63e01a7de" (UID: "38fd97a6-e936-4503-a238-97b63e01a7de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: E1008 13:40:23.120310 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-kdhl2], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/novacell13b0c-account-delete-2wjl9" podUID="5e48c1a5-da0e-4539-a55c-7c0bbdae2486"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.120644 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fd97a6-e936-4503-a238-97b63e01a7de-config" (OuterVolumeSpecName: "config") pod "38fd97a6-e936-4503-a238-97b63e01a7de" (UID: "38fd97a6-e936-4503-a238-97b63e01a7de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.129110 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fd97a6-e936-4503-a238-97b63e01a7de-kube-api-access-5mlzk" (OuterVolumeSpecName: "kube-api-access-5mlzk") pod "38fd97a6-e936-4503-a238-97b63e01a7de" (UID: "38fd97a6-e936-4503-a238-97b63e01a7de"). InnerVolumeSpecName "kube-api-access-5mlzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.129643 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2136579b-6d89-48ff-b960-a0401eb9af4c-kube-api-access-2jjt8" (OuterVolumeSpecName: "kube-api-access-2jjt8") pod "2136579b-6d89-48ff-b960-a0401eb9af4c" (UID: "2136579b-6d89-48ff-b960-a0401eb9af4c"). InnerVolumeSpecName "kube-api-access-2jjt8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.129728 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "38fd97a6-e936-4503-a238-97b63e01a7de" (UID: "38fd97a6-e936-4503-a238-97b63e01a7de"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.156346 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican5412-account-delete-fh88w"]
Oct 08 13:40:23 crc kubenswrapper[5065]: W1008 13:40:23.171505 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36ed295f_7baa_466e_8a26_6d923a84d1b5.slice/crio-514f59d9895dc49f96fa16b11f9b3fe76b95bd05f56222aac0f68c052d761558 WatchSource:0}: Error finding container 514f59d9895dc49f96fa16b11f9b3fe76b95bd05f56222aac0f68c052d761558: Status 404 returned error can't find the container with id 514f59d9895dc49f96fa16b11f9b3fe76b95bd05f56222aac0f68c052d761558
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.175082 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2136579b-6d89-48ff-b960-a0401eb9af4c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2136579b-6d89-48ff-b960-a0401eb9af4c" (UID: "2136579b-6d89-48ff-b960-a0401eb9af4c"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.182098 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b215a42c-d422-4db9-a83e-df79f7bff9e6-config\") pod \"b215a42c-d422-4db9-a83e-df79f7bff9e6\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.182148 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"b215a42c-d422-4db9-a83e-df79f7bff9e6\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.182217 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf7xl\" (UniqueName: \"kubernetes.io/projected/b215a42c-d422-4db9-a83e-df79f7bff9e6-kube-api-access-nf7xl\") pod \"b215a42c-d422-4db9-a83e-df79f7bff9e6\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.182362 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-combined-ca-bundle\") pod \"b215a42c-d422-4db9-a83e-df79f7bff9e6\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.182460 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-metrics-certs-tls-certs\") pod \"b215a42c-d422-4db9-a83e-df79f7bff9e6\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.182492 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b215a42c-d422-4db9-a83e-df79f7bff9e6-ovsdb-rundir\") pod \"b215a42c-d422-4db9-a83e-df79f7bff9e6\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.182514 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-ovsdbserver-nb-tls-certs\") pod \"b215a42c-d422-4db9-a83e-df79f7bff9e6\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.182586 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b215a42c-d422-4db9-a83e-df79f7bff9e6-scripts\") pod \"b215a42c-d422-4db9-a83e-df79f7bff9e6\" (UID: \"b215a42c-d422-4db9-a83e-df79f7bff9e6\") "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.183113 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38fd97a6-e936-4503-a238-97b63e01a7de-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.183143 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.183156 5065 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2136579b-6d89-48ff-b960-a0401eb9af4c-openstack-config\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.183168 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.183178 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jjt8\" (UniqueName: \"kubernetes.io/projected/2136579b-6d89-48ff-b960-a0401eb9af4c-kube-api-access-2jjt8\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.183189 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fd97a6-e936-4503-a238-97b63e01a7de-config\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.183200 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/38fd97a6-e936-4503-a238-97b63e01a7de-ovsdb-rundir\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.183213 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mlzk\" (UniqueName: \"kubernetes.io/projected/38fd97a6-e936-4503-a238-97b63e01a7de-kube-api-access-5mlzk\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.184452 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b215a42c-d422-4db9-a83e-df79f7bff9e6-config" (OuterVolumeSpecName: "config") pod "b215a42c-d422-4db9-a83e-df79f7bff9e6" (UID: "b215a42c-d422-4db9-a83e-df79f7bff9e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.186925 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b215a42c-d422-4db9-a83e-df79f7bff9e6-scripts" (OuterVolumeSpecName: "scripts") pod "b215a42c-d422-4db9-a83e-df79f7bff9e6" (UID: "b215a42c-d422-4db9-a83e-df79f7bff9e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.189538 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b215a42c-d422-4db9-a83e-df79f7bff9e6-kube-api-access-nf7xl" (OuterVolumeSpecName: "kube-api-access-nf7xl") pod "b215a42c-d422-4db9-a83e-df79f7bff9e6" (UID: "b215a42c-d422-4db9-a83e-df79f7bff9e6"). InnerVolumeSpecName "kube-api-access-nf7xl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.198930 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b215a42c-d422-4db9-a83e-df79f7bff9e6-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "b215a42c-d422-4db9-a83e-df79f7bff9e6" (UID: "b215a42c-d422-4db9-a83e-df79f7bff9e6"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.206616 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38fd97a6-e936-4503-a238-97b63e01a7de" (UID: "38fd97a6-e936-4503-a238-97b63e01a7de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.275118 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "b215a42c-d422-4db9-a83e-df79f7bff9e6" (UID: "b215a42c-d422-4db9-a83e-df79f7bff9e6"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.282405 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b215a42c-d422-4db9-a83e-df79f7bff9e6" (UID: "b215a42c-d422-4db9-a83e-df79f7bff9e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.285467 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.285501 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.285510 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b215a42c-d422-4db9-a83e-df79f7bff9e6-ovsdb-rundir\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.285519 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b215a42c-d422-4db9-a83e-df79f7bff9e6-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.285529 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b215a42c-d422-4db9-a83e-df79f7bff9e6-config\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.285555 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.285565 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf7xl\" (UniqueName: \"kubernetes.io/projected/b215a42c-d422-4db9-a83e-df79f7bff9e6-kube-api-access-nf7xl\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.312812 5065 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.312829 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" containerName="galera" containerID="cri-o://9c1b2cb3bba97162839d7321cc87203aba920e07e4f99928885d1e4d4f6be3cf" gracePeriod=30
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.340054 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "18710aa1-a99f-421b-9a4f-694362061773" (UID: "18710aa1-a99f-421b-9a4f-694362061773"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.352957 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ae3d89be-0a42-4a3d-914c-3bff67bd37b4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.387272 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdhl2\" (UniqueName: \"kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2\") pod \"novacell13b0c-account-delete-2wjl9\" (UID: \"5e48c1a5-da0e-4539-a55c-7c0bbdae2486\") " pod="openstack/novacell13b0c-account-delete-2wjl9"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.387570 5065 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.387589 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: E1008 13:40:23.391210 5065 projected.go:194] Error preparing data for projected volume kube-api-access-kdhl2 for pod openstack/novacell13b0c-account-delete-2wjl9: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found
Oct 08 13:40:23 crc kubenswrapper[5065]: E1008 13:40:23.391297 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2 podName:5e48c1a5-da0e-4539-a55c-7c0bbdae2486 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:25.391277066 +0000 UTC m=+1327.168658823 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-kdhl2" (UniqueName: "kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2") pod "novacell13b0c-account-delete-2wjl9" (UID: "5e48c1a5-da0e-4539-a55c-7c0bbdae2486") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.460347 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eba221c-653d-434a-a486-16be41c4a5c4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "4eba221c-653d-434a-a486-16be41c4a5c4" (UID: "4eba221c-653d-434a-a486-16be41c4a5c4"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.479473 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2136579b-6d89-48ff-b960-a0401eb9af4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2136579b-6d89-48ff-b960-a0401eb9af4c" (UID: "2136579b-6d89-48ff-b960-a0401eb9af4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.488463 5065 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.489706 5065 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eba221c-653d-434a-a486-16be41c4a5c4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.489724 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2136579b-6d89-48ff-b960-a0401eb9af4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.489733 5065 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.513094 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4749b7e4-3896-474d-84b3-8ddf351a24ac-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "4749b7e4-3896-474d-84b3-8ddf351a24ac" (UID: "4749b7e4-3896-474d-84b3-8ddf351a24ac"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.548502 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "b215a42c-d422-4db9-a83e-df79f7bff9e6" (UID: "b215a42c-d422-4db9-a83e-df79f7bff9e6"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.552614 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2136579b-6d89-48ff-b960-a0401eb9af4c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2136579b-6d89-48ff-b960-a0401eb9af4c" (UID: "2136579b-6d89-48ff-b960-a0401eb9af4c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.555601 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "38fd97a6-e936-4503-a238-97b63e01a7de" (UID: "38fd97a6-e936-4503-a238-97b63e01a7de"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.569699 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-config" (OuterVolumeSpecName: "config") pod "18710aa1-a99f-421b-9a4f-694362061773" (UID: "18710aa1-a99f-421b-9a4f-694362061773"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.593370 5065 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4749b7e4-3896-474d-84b3-8ddf351a24ac-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.593611 5065 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.593641 5065 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2136579b-6d89-48ff-b960-a0401eb9af4c-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.593656 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18710aa1-a99f-421b-9a4f-694362061773-config\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.593669 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.599758 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a416f725-cd7c-4bd8-9123-28cad18157d9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused"
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.648083 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi6610-account-delete-vhpbl"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.652464 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "38fd97a6-e936-4503-a238-97b63e01a7de" (UID: "38fd97a6-e936-4503-a238-97b63e01a7de"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.694702 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "b215a42c-d422-4db9-a83e-df79f7bff9e6" (UID: "b215a42c-d422-4db9-a83e-df79f7bff9e6"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.695894 5065 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b215a42c-d422-4db9-a83e-df79f7bff9e6-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.696056 5065 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/38fd97a6-e936-4503-a238-97b63e01a7de-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.701993 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutronca6e-account-delete-g4lj7"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.704806 5065 generic.go:334] "Generic (PLEG): container finished" podID="bbca12dd-73a9-4533-b424-ebaf0c8cec0c" containerID="e59b55badd92b9747669cea1a276f0f4749ca1574e5171035193093770c5a6ca" exitCode=143
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.704881 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbca12dd-73a9-4533-b424-ebaf0c8cec0c","Type":"ContainerDied","Data":"e59b55badd92b9747669cea1a276f0f4749ca1574e5171035193093770c5a6ca"}
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.706169 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance8439-account-delete-zlfcc"]
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.707161 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican5412-account-delete-fh88w" event={"ID":"36ed295f-7baa-466e-8a26-6d923a84d1b5","Type":"ContainerStarted","Data":"514f59d9895dc49f96fa16b11f9b3fe76b95bd05f56222aac0f68c052d761558"}
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.716339 5065 generic.go:334] "Generic (PLEG): container finished" podID="38fe9b6a-9cdf-4585-a585-474172306dd9" containerID="2758344b7c6842d1a78126edc64da4687c969ec597d11812d34e372c73948d04" exitCode=0
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.716395 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementfb78-account-delete-xfxsd" event={"ID":"38fe9b6a-9cdf-4585-a585-474172306dd9","Type":"ContainerDied","Data":"2758344b7c6842d1a78126edc64da4687c969ec597d11812d34e372c73948d04"}
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.716434 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementfb78-account-delete-xfxsd" event={"ID":"38fe9b6a-9cdf-4585-a585-474172306dd9","Type":"ContainerStarted","Data":"b50196b1626344fe3206ed82794cb37cad012729508bf197c87d73ae7bb024f9"}
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.720335 5065 generic.go:334] "Generic (PLEG): container finished" podID="caf670f8-9cf6-4200-8036-05e9798cad78" containerID="b8ae852d80aae6acad85780abb16c8854f1fddd5178e43292b6c487839c29fc5" exitCode=0
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.720360 5065 generic.go:334] "Generic (PLEG): container finished" podID="caf670f8-9cf6-4200-8036-05e9798cad78" containerID="3c914d8505b39861ccdfff5dcc013fed1cc1c91fa6e400a924167a096e07326c" exitCode=0
Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.720397 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b95d746c-knffl" event={"ID":"caf670f8-9cf6-4200-8036-05e9798cad78","Type":"ContainerDied","Data":"b8ae852d80aae6acad85780abb16c8854f1fddd5178e43292b6c487839c29fc5"}
Oct 08
13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.720445 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b95d746c-knffl" event={"ID":"caf670f8-9cf6-4200-8036-05e9798cad78","Type":"ContainerDied","Data":"3c914d8505b39861ccdfff5dcc013fed1cc1c91fa6e400a924167a096e07326c"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.720462 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b95d746c-knffl" event={"ID":"caf670f8-9cf6-4200-8036-05e9798cad78","Type":"ContainerDied","Data":"6a3261ec5809cdf2cf9598fc1759fd359a19e4343bcd3a70feec10c8c29e1fe4"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.720474 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a3261ec5809cdf2cf9598fc1759fd359a19e4343bcd3a70feec10c8c29e1fe4" Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.722064 5065 scope.go:117] "RemoveContainer" containerID="9464772db83ebfdeae74f66036b1b7d4c94daed370654f9e39056ee48ce23e40" Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.722187 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.746964 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="361bbf3967cbd94d97d58e01749266489ce91e87fdaebf76c4503f54283e2a95" exitCode=0 Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.747076 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="4af6a29227998441f27fb03057c52c8260bc95198e0a44323b807dcb4528d586" exitCode=0 Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.747142 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="64b194891a57866256321c60963b1a55abe5ea9137d96d8eea3238c821373834" exitCode=0 Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.747240 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"361bbf3967cbd94d97d58e01749266489ce91e87fdaebf76c4503f54283e2a95"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.747324 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"4af6a29227998441f27fb03057c52c8260bc95198e0a44323b807dcb4528d586"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.747408 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"64b194891a57866256321c60963b1a55abe5ea9137d96d8eea3238c821373834"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.749107 5065 generic.go:334] "Generic (PLEG): container finished" podID="f580765e-50e7-42a1-a798-325b80e29e9d" containerID="33ceec7bd32526aadb9fdccd627ea9d6843d1eeca2d61fca38b266e771abc801" exitCode=0 Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.749232 5065 generic.go:334] "Generic (PLEG): container finished" podID="f580765e-50e7-42a1-a798-325b80e29e9d" containerID="e1798572ea57cfc0ec49a70e753277538e330c3d721fda3351da9d563af5e085" exitCode=143 Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.749333 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" 
event={"ID":"f580765e-50e7-42a1-a798-325b80e29e9d","Type":"ContainerDied","Data":"33ceec7bd32526aadb9fdccd627ea9d6843d1eeca2d61fca38b266e771abc801"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.749429 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" event={"ID":"f580765e-50e7-42a1-a798-325b80e29e9d","Type":"ContainerDied","Data":"e1798572ea57cfc0ec49a70e753277538e330c3d721fda3351da9d563af5e085"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.753642 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_38fd97a6-e936-4503-a238-97b63e01a7de/ovsdbserver-sb/0.log" Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.753743 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"38fd97a6-e936-4503-a238-97b63e01a7de","Type":"ContainerDied","Data":"dfdde920204eae35ac2e7b9cdb76c94155c0b6ebbc6c25aec62ddcbd9f298819"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.753830 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.763146 5065 generic.go:334] "Generic (PLEG): container finished" podID="493c63a1-0210-4a70-a964-79522491fd05" containerID="f00de157ca335b201eedfe40fcdec9a8b8d879af63720162be6c83a57024286e" exitCode=143 Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.763249 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"493c63a1-0210-4a70-a964-79522491fd05","Type":"ContainerDied","Data":"f00de157ca335b201eedfe40fcdec9a8b8d879af63720162be6c83a57024286e"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.765250 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi6610-account-delete-vhpbl" event={"ID":"b6712e27-2a2f-43a8-8c79-dd7b5090d987","Type":"ContainerStarted","Data":"42c69cdded70cc31b909d2acc262f84d04a4d3b107b46480f992a616d4dafd0a"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.767540 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b215a42c-d422-4db9-a83e-df79f7bff9e6/ovsdbserver-nb/0.log" Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.767710 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b215a42c-d422-4db9-a83e-df79f7bff9e6","Type":"ContainerDied","Data":"f33c34dece7eb0326dae40dda26fa89c7b1bf0188c542c1e7483b8e186d2b770"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.768039 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.786756 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" event={"ID":"18710aa1-a99f-421b-9a4f-694362061773","Type":"ContainerDied","Data":"ee43953594f20a81e11b57b199f2d95d7500161cb71820143c358c1599be5885"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.786859 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d4d96bb9-76vsq" Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.796254 5065 generic.go:334] "Generic (PLEG): container finished" podID="9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" containerID="53a13a8cc7470ca3fcf00e1a1762fd3a353cd97e981efeec2278512dabac8016" exitCode=0 Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.796277 5065 generic.go:334] "Generic (PLEG): container finished" podID="9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" containerID="59a606638a8f15bd4f91168bdf8deccda702a925eb4c86f515c45335f2c32e92" exitCode=143 Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.796332 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9966bcdf-t8xzk" event={"ID":"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac","Type":"ContainerDied","Data":"53a13a8cc7470ca3fcf00e1a1762fd3a353cd97e981efeec2278512dabac8016"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.796373 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9966bcdf-t8xzk" event={"ID":"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac","Type":"ContainerDied","Data":"59a606638a8f15bd4f91168bdf8deccda702a925eb4c86f515c45335f2c32e92"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.800165 5065 generic.go:334] "Generic (PLEG): container finished" podID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" exitCode=0 Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.800214 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f9wxn" event={"ID":"f523d852-2e73-4168-b3ca-af18fa28cc07","Type":"ContainerDied","Data":"a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.819685 5065 generic.go:334] "Generic (PLEG): container finished" podID="2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" containerID="282626a68ae112e2a7cb36758619cc926c9f8f75cf3cc744534bbd577bef1de1" exitCode=143 Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.819819 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xnw9m" Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.819942 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dd9f968c6-s658p" event={"ID":"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7","Type":"ContainerDied","Data":"282626a68ae112e2a7cb36758619cc926c9f8f75cf3cc744534bbd577bef1de1"} Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.820277 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell13b0c-account-delete-2wjl9" Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.822959 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-l67ps" Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.835842 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell0be58-account-delete-r2zk9"] Oct 08 13:40:23 crc kubenswrapper[5065]: E1008 13:40:23.878994 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:23 crc kubenswrapper[5065]: E1008 13:40:23.879356 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:23 crc kubenswrapper[5065]: E1008 13:40:23.879643 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:23 crc kubenswrapper[5065]: E1008 13:40:23.879703 5065 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovsdb-server" Oct 08 13:40:23 crc kubenswrapper[5065]: E1008 13:40:23.880456 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:23 crc kubenswrapper[5065]: E1008 13:40:23.889460 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:23 crc kubenswrapper[5065]: E1008 13:40:23.892137 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:23 crc kubenswrapper[5065]: E1008 13:40:23.892201 5065 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit 
code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovs-vswitchd" Oct 08 13:40:23 crc kubenswrapper[5065]: I1008 13:40:23.953733 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.165:8776/healthcheck\": read tcp 10.217.0.2:51190->10.217.0.165:8776: read: connection reset by peer" Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.001848 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="6470de54-fdec-4648-b941-1031c67f55ca" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.197:6080/vnc_lite.html\": dial tcp 10.217.0.197:6080: connect: connection refused" Oct 08 13:40:24 crc kubenswrapper[5065]: E1008 13:40:24.003541 5065 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 08 13:40:24 crc kubenswrapper[5065]: E1008 13:40:24.003637 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data podName:a416f725-cd7c-4bd8-9123-28cad18157d9 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:28.003605116 +0000 UTC m=+1329.780986873 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data") pod "rabbitmq-cell1-server-0" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9") : configmap "rabbitmq-cell1-config-data" not found Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.328777 5065 scope.go:117] "RemoveContainer" containerID="fbe9f8ee86cd56018708f58dee327d04bc11e4fe4d541910462237c91b46cc3e" Oct 08 13:40:24 crc kubenswrapper[5065]: E1008 13:40:24.379553 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3536fcb126639de1eca1e9e8fb96986742592f8949764d6a7616a3f3a55a523f is running failed: container process not found" containerID="3536fcb126639de1eca1e9e8fb96986742592f8949764d6a7616a3f3a55a523f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 13:40:24 crc kubenswrapper[5065]: E1008 13:40:24.379845 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3536fcb126639de1eca1e9e8fb96986742592f8949764d6a7616a3f3a55a523f is running failed: container process not found" containerID="3536fcb126639de1eca1e9e8fb96986742592f8949764d6a7616a3f3a55a523f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 13:40:24 crc kubenswrapper[5065]: E1008 13:40:24.380076 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3536fcb126639de1eca1e9e8fb96986742592f8949764d6a7616a3f3a55a523f is running failed: container process not found" containerID="3536fcb126639de1eca1e9e8fb96986742592f8949764d6a7616a3f3a55a523f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 13:40:24 crc kubenswrapper[5065]: E1008 13:40:24.380111 5065 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
3536fcb126639de1eca1e9e8fb96986742592f8949764d6a7616a3f3a55a523f is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="14ab13f6-4348-4848-9149-4d1ee240d1ed" containerName="nova-cell1-conductor-conductor" Oct 08 13:40:24 crc kubenswrapper[5065]: E1008 13:40:24.512748 5065 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 08 13:40:24 crc kubenswrapper[5065]: E1008 13:40:24.512909 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data podName:ae3d89be-0a42-4a3d-914c-3bff67bd37b4 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:28.512861351 +0000 UTC m=+1330.290243108 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data") pod "rabbitmq-server-0" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4") : configmap "rabbitmq-config-data" not found Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.710007 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.728849 5065 scope.go:117] "RemoveContainer" containerID="2915271db74b88ab9e16677d8efcfed4d3eb12143a6af327cb8a7a3688f4f413" Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.768347 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.768621 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="ceilometer-central-agent" containerID="cri-o://cb47b71cac0e504aa65a98ca53c8d8abc53a58802c1dd955a2eb6fd0b50a5da8" gracePeriod=30 Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.768981 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="proxy-httpd" containerID="cri-o://5064c6efe7bd2fd17eb7c7a569db4fa8aa930bcc4f32d0c03a00b49f045cd8eb" gracePeriod=30 Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.769026 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="sg-core" containerID="cri-o://05fb2e1abbe2ed373a22504cfa769a58bda357d67045a5a667156055922438b4" gracePeriod=30 Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.769057 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="ceilometer-notification-agent" containerID="cri-o://b435edf6aa605158448d0a1d1edd6a7ab24f12bb0cc39517202571a1d63578f3" gracePeriod=30 Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.790805 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.792655 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cde619b2-b551-4a41-b2f2-c38f1b507a82" containerName="kube-state-metrics" containerID="cri-o://1f708c017b5683915f1b955e06bbebbafcc9a7518e52663fdd2d0620e2d9d89e" gracePeriod=30 Oct 08 13:40:24 crc 
kubenswrapper[5065]: I1008 13:40:24.821108 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-combined-ca-bundle\") pod \"caf670f8-9cf6-4200-8036-05e9798cad78\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.821466 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-internal-tls-certs\") pod \"caf670f8-9cf6-4200-8036-05e9798cad78\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.821491 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-public-tls-certs\") pod \"caf670f8-9cf6-4200-8036-05e9798cad78\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.821527 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caf670f8-9cf6-4200-8036-05e9798cad78-log-httpd\") pod \"caf670f8-9cf6-4200-8036-05e9798cad78\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.821560 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/caf670f8-9cf6-4200-8036-05e9798cad78-etc-swift\") pod \"caf670f8-9cf6-4200-8036-05e9798cad78\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.821610 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-config-data\") pod \"caf670f8-9cf6-4200-8036-05e9798cad78\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.821641 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caf670f8-9cf6-4200-8036-05e9798cad78-run-httpd\") pod \"caf670f8-9cf6-4200-8036-05e9798cad78\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.821734 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5plxz\" (UniqueName: \"kubernetes.io/projected/caf670f8-9cf6-4200-8036-05e9798cad78-kube-api-access-5plxz\") pod \"caf670f8-9cf6-4200-8036-05e9798cad78\" (UID: \"caf670f8-9cf6-4200-8036-05e9798cad78\") " Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.823101 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caf670f8-9cf6-4200-8036-05e9798cad78-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "caf670f8-9cf6-4200-8036-05e9798cad78" (UID: "caf670f8-9cf6-4200-8036-05e9798cad78"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.828746 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell13b0c-account-delete-2wjl9" Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.832193 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caf670f8-9cf6-4200-8036-05e9798cad78-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "caf670f8-9cf6-4200-8036-05e9798cad78" (UID: "caf670f8-9cf6-4200-8036-05e9798cad78"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.881324 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf670f8-9cf6-4200-8036-05e9798cad78-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "caf670f8-9cf6-4200-8036-05e9798cad78" (UID: "caf670f8-9cf6-4200-8036-05e9798cad78"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:24 crc kubenswrapper[5065]: I1008 13:40:24.884516 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf670f8-9cf6-4200-8036-05e9798cad78-kube-api-access-5plxz" (OuterVolumeSpecName: "kube-api-access-5plxz") pod "caf670f8-9cf6-4200-8036-05e9798cad78" (UID: "caf670f8-9cf6-4200-8036-05e9798cad78"). InnerVolumeSpecName "kube-api-access-5plxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:24.916870 5065 generic.go:334] "Generic (PLEG): container finished" podID="03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" containerID="9c1b2cb3bba97162839d7321cc87203aba920e07e4f99928885d1e4d4f6be3cf" exitCode=0 Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:24.943307 5065 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caf670f8-9cf6-4200-8036-05e9798cad78-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:24.943345 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5plxz\" (UniqueName: \"kubernetes.io/projected/caf670f8-9cf6-4200-8036-05e9798cad78-kube-api-access-5plxz\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:24.943358 5065 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caf670f8-9cf6-4200-8036-05e9798cad78-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:24.943370 5065 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/caf670f8-9cf6-4200-8036-05e9798cad78-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:24.951570 5065 generic.go:334] "Generic (PLEG): container finished" podID="c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" containerID="1bf0529c90d0ac2dd5a5b0e88e4e632725a93d83c5bbb3394596dcada46534fe" exitCode=0 Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.003395 5065 generic.go:334] "Generic (PLEG): container finished" podID="14ab13f6-4348-4848-9149-4d1ee240d1ed" containerID="3536fcb126639de1eca1e9e8fb96986742592f8949764d6a7616a3f3a55a523f" exitCode=0 Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.019426 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-config-data" (OuterVolumeSpecName: "config-data") pod "caf670f8-9cf6-4200-8036-05e9798cad78" (UID: 
"caf670f8-9cf6-4200-8036-05e9798cad78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.019494 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "caf670f8-9cf6-4200-8036-05e9798cad78" (UID: "caf670f8-9cf6-4200-8036-05e9798cad78"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:25 crc kubenswrapper[5065]: E1008 13:40:25.042949 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 91afe7ac30e8893277300be9b194d6af0055b047630cb6d73444b3aff6643b7b is running failed: container process not found" containerID="91afe7ac30e8893277300be9b194d6af0055b047630cb6d73444b3aff6643b7b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 13:40:25 crc kubenswrapper[5065]: E1008 13:40:25.049093 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 91afe7ac30e8893277300be9b194d6af0055b047630cb6d73444b3aff6643b7b is running failed: container process not found" containerID="91afe7ac30e8893277300be9b194d6af0055b047630cb6d73444b3aff6643b7b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.050249 5065 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.050273 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:25 crc kubenswrapper[5065]: E1008 13:40:25.057689 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 91afe7ac30e8893277300be9b194d6af0055b047630cb6d73444b3aff6643b7b is running failed: container process not found" containerID="91afe7ac30e8893277300be9b194d6af0055b047630cb6d73444b3aff6643b7b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 13:40:25 crc kubenswrapper[5065]: E1008 13:40:25.058111 5065 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 91afe7ac30e8893277300be9b194d6af0055b047630cb6d73444b3aff6643b7b is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6ce5c750-265a-4589-8f5e-a9e6a846d0d0" containerName="nova-scheduler-scheduler" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.066224 5065 generic.go:334] "Generic (PLEG): container finished" podID="6ce5c750-265a-4589-8f5e-a9e6a846d0d0" containerID="91afe7ac30e8893277300be9b194d6af0055b047630cb6d73444b3aff6643b7b" exitCode=0 Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.080664 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "caf670f8-9cf6-4200-8036-05e9798cad78" (UID: "caf670f8-9cf6-4200-8036-05e9798cad78"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.083235 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caf670f8-9cf6-4200-8036-05e9798cad78" (UID: "caf670f8-9cf6-4200-8036-05e9798cad78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.085891 5065 generic.go:334] "Generic (PLEG): container finished" podID="6470de54-fdec-4648-b941-1031c67f55ca" containerID="51d59f6c1a32d7f9548db66179bc1cdd29b957b043fcdf8c829de549616ab078" exitCode=0 Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.087511 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85b95d746c-knffl" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.087732 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell13b0c-account-delete-2wjl9" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.114206 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ca7fdfe-89ed-41bb-a9cb-919a501afeb3" path="/var/lib/kubelet/pods/1ca7fdfe-89ed-41bb-a9cb-919a501afeb3/volumes" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.114920 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2136579b-6d89-48ff-b960-a0401eb9af4c" path="/var/lib/kubelet/pods/2136579b-6d89-48ff-b960-a0401eb9af4c/volumes" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.115474 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63f77331-9d91-4f35-b1b9-a9c3c68162ca" path="/var/lib/kubelet/pods/63f77331-9d91-4f35-b1b9-a9c3c68162ca/volumes" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.130615 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="653cf006-7306-4e5a-b70c-8454b8c47b2d" path="/var/lib/kubelet/pods/653cf006-7306-4e5a-b70c-8454b8c47b2d/volumes" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.139165 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb3b42da-fb25-4e64-ae1b-1280eb615d00" path="/var/lib/kubelet/pods/fb3b42da-fb25-4e64-ae1b-1280eb615d00/volumes" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148279 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9966bcdf-t8xzk" event={"ID":"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac","Type":"ContainerDied","Data":"6d978fa991d308dad178f38857a44d6d55145ebc47bb02fa18ad0223b855976f"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148329 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d978fa991d308dad178f38857a44d6d55145ebc47bb02fa18ad0223b855976f" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148343 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148365 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e","Type":"ContainerDied","Data":"9c1b2cb3bba97162839d7321cc87203aba920e07e4f99928885d1e4d4f6be3cf"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148380 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e","Type":"ContainerDied","Data":"50818428c86541f6d1d579a2c1817e0ca794a71a770d9778453461428ab79f29"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148389 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50818428c86541f6d1d579a2c1817e0ca794a71a770d9778453461428ab79f29" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148399 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-76dpp"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148426 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad","Type":"ContainerDied","Data":"1bf0529c90d0ac2dd5a5b0e88e4e632725a93d83c5bbb3394596dcada46534fe"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148440 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fnmr5"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148457 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-76dpp"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148474 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-fnmr5"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148489 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-77965b6945-w5rpz"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148503 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148515 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-dfd72"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148527 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-dfd72"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148539 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad","Type":"ContainerDied","Data":"1ddaf5ed468b9f537cc64dc2d3d0360005419a11d2b346446606cf70d57d78b3"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148551 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ddaf5ed468b9f537cc64dc2d3d0360005419a11d2b346446606cf70d57d78b3" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148563 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4f23-account-create-qdwq2"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148574 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"14ab13f6-4348-4848-9149-4d1ee240d1ed","Type":"ContainerDied","Data":"3536fcb126639de1eca1e9e8fb96986742592f8949764d6a7616a3f3a55a523f"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148589 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"14ab13f6-4348-4848-9149-4d1ee240d1ed","Type":"ContainerDied","Data":"736112f627b33774274a94e82f1c964a9798932141b0318ba04044bc75632bef"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148599 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="736112f627b33774274a94e82f1c964a9798932141b0318ba04044bc75632bef" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148608 
5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0be58-account-delete-r2zk9" event={"ID":"1582f178-44ee-4e28-a6d6-1d6a29050b56","Type":"ContainerStarted","Data":"79cbcb9bd528d9748b0a6cdeb3101a4da236964b45d36cba0123973fd33dcbd5"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148622 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" event={"ID":"f580765e-50e7-42a1-a798-325b80e29e9d","Type":"ContainerDied","Data":"6f88963c538b8156c020550621c70a739f51ae1417b73abe0a7e07241bb9e91b"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148634 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f88963c538b8156c020550621c70a739f51ae1417b73abe0a7e07241bb9e91b" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148644 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6ce5c750-265a-4589-8f5e-a9e6a846d0d0","Type":"ContainerDied","Data":"91afe7ac30e8893277300be9b194d6af0055b047630cb6d73444b3aff6643b7b"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148657 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance8439-account-delete-zlfcc" event={"ID":"f60057cf-9f14-4fbd-b161-e27abcc9c7a5","Type":"ContainerStarted","Data":"39e03284e714bfc16de7dd5386b607696858b20426ed3a90b5267b3e10051e33"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148670 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronca6e-account-delete-g4lj7" event={"ID":"80b97e55-65fa-4e4a-becd-d13dd95bb78a","Type":"ContainerStarted","Data":"2b68bf33ef924c8c0c4198eb00fa5f8934707e71959dcd5d9fc3a01bf5cc7447"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148683 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6470de54-fdec-4648-b941-1031c67f55ca","Type":"ContainerDied","Data":"51d59f6c1a32d7f9548db66179bc1cdd29b957b043fcdf8c829de549616ab078"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148696 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6470de54-fdec-4648-b941-1031c67f55ca","Type":"ContainerDied","Data":"47af1d66508e4273edebc4fd31c76944939d16540d68c065850969255415e6df"} Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.148705 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47af1d66508e4273edebc4fd31c76944939d16540d68c065850969255415e6df" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.149687 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-77965b6945-w5rpz" podUID="0643aa92-2649-4c41-b16e-9a05aac93f35" containerName="keystone-api" containerID="cri-o://c5ecc0498b80f64e750be4906d2d2f254a379f8816d1ff34f5e046c74b53a267" gracePeriod=30 Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.151019 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="a29eea83-9d60-4101-a351-6f8468a8116c" containerName="memcached" containerID="cri-o://3dd840b2a1968cb45aa4333789027815be04db1faa5fe300d7fbe1813965b970" gracePeriod=30 Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.171468 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 
13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.171508 5065 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf670f8-9cf6-4200-8036-05e9798cad78-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.207113 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4f23-account-create-qdwq2"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.242624 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-7t8kf"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.269457 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-7t8kf"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.278791 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-fb78-account-create-zw8ns"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.288687 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementfb78-account-delete-xfxsd"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.294313 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-fb78-account-create-zw8ns"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.477870 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdhl2\" (UniqueName: \"kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2\") pod \"novacell13b0c-account-delete-2wjl9\" (UID: \"5e48c1a5-da0e-4539-a55c-7c0bbdae2486\") " pod="openstack/novacell13b0c-account-delete-2wjl9" Oct 08 13:40:25 crc kubenswrapper[5065]: E1008 13:40:25.485695 5065 projected.go:194] Error preparing data for projected volume kube-api-access-kdhl2 for pod openstack/novacell13b0c-account-delete-2wjl9: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Oct 08 13:40:25 crc kubenswrapper[5065]: E1008 13:40:25.498243 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2 podName:5e48c1a5-da0e-4539-a55c-7c0bbdae2486 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:29.498211509 +0000 UTC m=+1331.275593266 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-kdhl2" (UniqueName: "kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2") pod "novacell13b0c-account-delete-2wjl9" (UID: "5e48c1a5-da0e-4539-a55c-7c0bbdae2486") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.504713 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="050c0e99-7984-43be-8701-84602f0c9294" containerName="galera" containerID="cri-o://4879ade2ad03c5af7ff4d4d4202d6af725543287d1ec07f3078406f5bb64df6e" gracePeriod=30 Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.556839 5065 scope.go:117] "RemoveContainer" containerID="87f2173752e5478c6a5d9a8376b7108d4661ca6ba8b7eb3ed95cd8acf5ce88a2" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.866563 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.879956 5065 scope.go:117] "RemoveContainer" containerID="1d09895c6bf96fc0edb25b62fd359eefd1417b0069aceeca025d88f6c6f9d233" Oct 08 13:40:25 crc kubenswrapper[5065]: E1008 13:40:25.919925 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4879ade2ad03c5af7ff4d4d4202d6af725543287d1ec07f3078406f5bb64df6e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Oct 08 13:40:25 crc kubenswrapper[5065]: E1008 13:40:25.933322 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4879ade2ad03c5af7ff4d4d4202d6af725543287d1ec07f3078406f5bb64df6e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.945030 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-xn6cj"] Oct 08 13:40:25 crc kubenswrapper[5065]: E1008 13:40:25.945582 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4879ade2ad03c5af7ff4d4d4202d6af725543287d1ec07f3078406f5bb64df6e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Oct 08 13:40:25 crc kubenswrapper[5065]: E1008 13:40:25.945632 5065 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="050c0e99-7984-43be-8701-84602f0c9294" containerName="galera" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.947328 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.957439 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.958563 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-xn6cj"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.971778 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.975456 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi6610-account-delete-vhpbl"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.976616 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.978002 5065 scope.go:117] "RemoveContainer" containerID="f184d017af97fa8ec5dc9b650dbdc3311a404638bdaa61798fc73a62a8d532a3" Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.992131 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6610-account-create-zpq4j"] Oct 08 13:40:25 crc kubenswrapper[5065]: I1008 13:40:25.993069 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.004646 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6610-account-create-zpq4j"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.021462 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-logs\") pod \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.021548 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-config-data\") pod \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.021609 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-combined-ca-bundle\") pod \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.021678 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj7qz\" (UniqueName: \"kubernetes.io/projected/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-kube-api-access-jj7qz\") pod \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.021787 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-config-data-custom\") pod \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\" (UID: \"9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.023102 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-logs" (OuterVolumeSpecName: "logs") pod "9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" (UID: "9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.040138 5065 scope.go:117] "RemoveContainer" containerID="2482d7697e88b5d83d42327c308805cfe4c53b7df827744880d0db1345958ec8" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.055052 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell13b0c-account-delete-2wjl9"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.061693 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" (UID: "9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.070814 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-kube-api-access-jj7qz" (OuterVolumeSpecName: "kube-api-access-jj7qz") pod "9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" (UID: "9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac"). InnerVolumeSpecName "kube-api-access-jj7qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.079351 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell13b0c-account-delete-2wjl9"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.100134 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-85b95d746c-knffl"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.109778 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-85b95d746c-knffl"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.118492 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.118661 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.120051 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5dd9f968c6-s658p" podUID="2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:51168->10.217.0.164:9311: read: connection reset by peer" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.120345 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5dd9f968c6-s658p" podUID="2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:51174->10.217.0.164:9311: read: connection reset by peer" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.127352 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.127920 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-operator-scripts\") pod \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.127969 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-vencrypt-tls-certs\") pod \"6470de54-fdec-4648-b941-1031c67f55ca\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.127993 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-config-data-default\") pod \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128023 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f580765e-50e7-42a1-a798-325b80e29e9d-logs\") pod \"f580765e-50e7-42a1-a798-325b80e29e9d\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128045 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-logs\") pod \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128082 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-combined-ca-bundle\") pod \"6470de54-fdec-4648-b941-1031c67f55ca\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128104 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-config-data-generated\") pod \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128131 5065 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-combined-ca-bundle\") pod \"f580765e-50e7-42a1-a798-325b80e29e9d\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128193 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-public-tls-certs\") pod \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128214 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-etc-machine-id\") pod \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128228 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-kolla-config\") pod \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128259 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128294 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-config-data-custom\") pod \"f580765e-50e7-42a1-a798-325b80e29e9d\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128314 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-config-data\") pod \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128338 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-scripts\") pod \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128385 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-447m6\" (UniqueName: \"kubernetes.io/projected/6470de54-fdec-4648-b941-1031c67f55ca-kube-api-access-447m6\") pod \"6470de54-fdec-4648-b941-1031c67f55ca\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128402 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-combined-ca-bundle\") pod \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128443 5065 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-config-data\") pod \"6470de54-fdec-4648-b941-1031c67f55ca\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128468 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-config-data\") pod \"f580765e-50e7-42a1-a798-325b80e29e9d\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128499 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rth65\" (UniqueName: \"kubernetes.io/projected/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-kube-api-access-rth65\") pod \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128522 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-config-data-custom\") pod \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128550 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-nova-novncproxy-tls-certs\") pod \"6470de54-fdec-4648-b941-1031c67f55ca\" (UID: \"6470de54-fdec-4648-b941-1031c67f55ca\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128598 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-internal-tls-certs\") pod \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128616 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-secrets\") pod \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128633 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp5cw\" (UniqueName: \"kubernetes.io/projected/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-kube-api-access-cp5cw\") pod \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128650 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-576d7\" (UniqueName: \"kubernetes.io/projected/f580765e-50e7-42a1-a798-325b80e29e9d-kube-api-access-576d7\") pod \"f580765e-50e7-42a1-a798-325b80e29e9d\" (UID: \"f580765e-50e7-42a1-a798-325b80e29e9d\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128674 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz4hq\" (UniqueName: \"kubernetes.io/projected/14ab13f6-4348-4848-9149-4d1ee240d1ed-kube-api-access-xz4hq\") pod \"14ab13f6-4348-4848-9149-4d1ee240d1ed\" (UID: \"14ab13f6-4348-4848-9149-4d1ee240d1ed\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 
13:40:26.128691 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ab13f6-4348-4848-9149-4d1ee240d1ed-combined-ca-bundle\") pod \"14ab13f6-4348-4848-9149-4d1ee240d1ed\" (UID: \"14ab13f6-4348-4848-9149-4d1ee240d1ed\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128705 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-combined-ca-bundle\") pod \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\" (UID: \"c0d5e818-6480-4dfb-b8a2-50dc4ec58dad\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128738 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-galera-tls-certs\") pod \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\" (UID: \"03eb50e9-c0b5-4f96-8dd0-27d776f8c71e\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.128755 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ab13f6-4348-4848-9149-4d1ee240d1ed-config-data\") pod \"14ab13f6-4348-4848-9149-4d1ee240d1ed\" (UID: \"14ab13f6-4348-4848-9149-4d1ee240d1ed\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.129127 5065 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.129142 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.129152 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj7qz\" (UniqueName: \"kubernetes.io/projected/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-kube-api-access-jj7qz\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.134353 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" (UID: "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.149433 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.149481 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-l67ps"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.152094 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f580765e-50e7-42a1-a798-325b80e29e9d-logs" (OuterVolumeSpecName: "logs") pod "f580765e-50e7-42a1-a798-325b80e29e9d" (UID: "f580765e-50e7-42a1-a798-325b80e29e9d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.152557 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" (UID: "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.152984 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.153946 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-logs" (OuterVolumeSpecName: "logs") pod "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" (UID: "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.154644 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" (UID: "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.155051 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" (UID: "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.160975 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-l67ps"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.162782 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" (UID: "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.162829 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-65r2q"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.163958 5065 generic.go:334] "Generic (PLEG): container finished" podID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerID="5064c6efe7bd2fd17eb7c7a569db4fa8aa930bcc4f32d0c03a00b49f045cd8eb" exitCode=0 Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.163979 5065 generic.go:334] "Generic (PLEG): container finished" podID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerID="05fb2e1abbe2ed373a22504cfa769a58bda357d67045a5a667156055922438b4" exitCode=2 Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.163987 5065 generic.go:334] "Generic (PLEG): container finished" podID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerID="cb47b71cac0e504aa65a98ca53c8d8abc53a58802c1dd955a2eb6fd0b50a5da8" exitCode=0 Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.170206 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73ec06a5-eadd-4545-a157-1aa731eabe13","Type":"ContainerDied","Data":"5064c6efe7bd2fd17eb7c7a569db4fa8aa930bcc4f32d0c03a00b49f045cd8eb"} Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.170273 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73ec06a5-eadd-4545-a157-1aa731eabe13","Type":"ContainerDied","Data":"05fb2e1abbe2ed373a22504cfa769a58bda357d67045a5a667156055922438b4"} Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.170283 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73ec06a5-eadd-4545-a157-1aa731eabe13","Type":"ContainerDied","Data":"cb47b71cac0e504aa65a98ca53c8d8abc53a58802c1dd955a2eb6fd0b50a5da8"} Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.178377 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-65r2q"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.190079 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-76vsq"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.190488 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-kube-api-access-rth65" (OuterVolumeSpecName: "kube-api-access-rth65") pod "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" (UID: "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e"). InnerVolumeSpecName "kube-api-access-rth65". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.191497 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-scripts" (OuterVolumeSpecName: "scripts") pod "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" (UID: "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.192000 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ab13f6-4348-4848-9149-4d1ee240d1ed-kube-api-access-xz4hq" (OuterVolumeSpecName: "kube-api-access-xz4hq") pod "14ab13f6-4348-4848-9149-4d1ee240d1ed" (UID: "14ab13f6-4348-4848-9149-4d1ee240d1ed"). InnerVolumeSpecName "kube-api-access-xz4hq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.192321 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6470de54-fdec-4648-b941-1031c67f55ca-kube-api-access-447m6" (OuterVolumeSpecName: "kube-api-access-447m6") pod "6470de54-fdec-4648-b941-1031c67f55ca" (UID: "6470de54-fdec-4648-b941-1031c67f55ca"). InnerVolumeSpecName "kube-api-access-447m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.193172 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" (UID: "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.197863 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f580765e-50e7-42a1-a798-325b80e29e9d-kube-api-access-576d7" (OuterVolumeSpecName: "kube-api-access-576d7") pod "f580765e-50e7-42a1-a798-325b80e29e9d" (UID: "f580765e-50e7-42a1-a798-325b80e29e9d"). InnerVolumeSpecName "kube-api-access-576d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.200079 5065 generic.go:334] "Generic (PLEG): container finished" podID="8c5926be-c223-4cbc-b6e3-a16726aa6c84" containerID="4f5cc840893a37a8358a60214a0e53d57ac801d6d1d7c94a9535b9435aca7b18" exitCode=0 Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.200127 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699db6b76b-fd9ls" event={"ID":"8c5926be-c223-4cbc-b6e3-a16726aa6c84","Type":"ContainerDied","Data":"4f5cc840893a37a8358a60214a0e53d57ac801d6d1d7c94a9535b9435aca7b18"} Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.200152 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699db6b76b-fd9ls" event={"ID":"8c5926be-c223-4cbc-b6e3-a16726aa6c84","Type":"ContainerDied","Data":"2a0ef0905e6e5abfa7a4de4cafbad33dc6822a7a370ab033e241ac6c417803b7"} Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.200169 5065 scope.go:117] "RemoveContainer" containerID="4f5cc840893a37a8358a60214a0e53d57ac801d6d1d7c94a9535b9435aca7b18" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.200249 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-699db6b76b-fd9ls" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.201713 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell0be58-account-delete-r2zk9"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.204885 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placementfb78-account-delete-xfxsd" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.205975 5065 generic.go:334] "Generic (PLEG): container finished" podID="bbca12dd-73a9-4533-b424-ebaf0c8cec0c" containerID="14b3fa78ec0416aa6cf474683bcb75b805d9d36b7d9d4631a80e461d654dfd5a" exitCode=0 Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.206105 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbca12dd-73a9-4533-b424-ebaf0c8cec0c","Type":"ContainerDied","Data":"14b3fa78ec0416aa6cf474683bcb75b805d9d36b7d9d4631a80e461d654dfd5a"} Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.207840 5065 generic.go:334] "Generic (PLEG): container finished" podID="cde619b2-b551-4a41-b2f2-c38f1b507a82" containerID="1f708c017b5683915f1b955e06bbebbafcc9a7518e52663fdd2d0620e2d9d89e" exitCode=2 Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.208208 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.208332 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cde619b2-b551-4a41-b2f2-c38f1b507a82","Type":"ContainerDied","Data":"1f708c017b5683915f1b955e06bbebbafcc9a7518e52663fdd2d0620e2d9d89e"} Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.209156 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cde619b2-b551-4a41-b2f2-c38f1b507a82","Type":"ContainerDied","Data":"3abc5a3d6943979f9bdb9a56e241cb87a53a2eb6cfcda0cf628e8b354d551042"} Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.211430 5065 generic.go:334] "Generic (PLEG): container finished" podID="fa6e8e72-d895-4018-a176-978d7975d8a6" containerID="7075a38356042d5e366d951a3d1e78c1621a5061aada6f743076696bfacd2c63" exitCode=0 Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.211560 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa6e8e72-d895-4018-a176-978d7975d8a6","Type":"ContainerDied","Data":"7075a38356042d5e366d951a3d1e78c1621a5061aada6f743076696bfacd2c63"} Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.212084 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-be58-account-create-cmqwf"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.215683 5065 generic.go:334] "Generic (PLEG): container finished" podID="8cea80f5-d915-459c-9882-4ce114929ab4" containerID="ed14fb4d41bcd1662b46fbaba5448c7edb1bddf5c128eb75189e64b2a3222c21" exitCode=0 Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.215810 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8cea80f5-d915-459c-9882-4ce114929ab4","Type":"ContainerDied","Data":"ed14fb4d41bcd1662b46fbaba5448c7edb1bddf5c128eb75189e64b2a3222c21"} Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.217399 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementfb78-account-delete-xfxsd" event={"ID":"38fe9b6a-9cdf-4585-a585-474172306dd9","Type":"ContainerDied","Data":"b50196b1626344fe3206ed82794cb37cad012729508bf197c87d73ae7bb024f9"} Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.217570 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b50196b1626344fe3206ed82794cb37cad012729508bf197c87d73ae7bb024f9" Oct 08 13:40:26 crc 
kubenswrapper[5065]: I1008 13:40:26.217548 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placementfb78-account-delete-xfxsd" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.218918 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f580765e-50e7-42a1-a798-325b80e29e9d" (UID: "f580765e-50e7-42a1-a798-325b80e29e9d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.221818 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-76vsq"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.225130 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-secrets" (OuterVolumeSpecName: "secrets") pod "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" (UID: "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.230561 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-combined-ca-bundle\") pod \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\" (UID: \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.230893 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-config-data\") pod \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\" (UID: \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.231390 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-state-metrics-tls-certs\") pod \"cde619b2-b551-4a41-b2f2-c38f1b507a82\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.231631 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmqgk\" (UniqueName: \"kubernetes.io/projected/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-kube-api-access-fmqgk\") pod \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\" (UID: \"6ce5c750-265a-4589-8f5e-a9e6a846d0d0\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.231788 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-combined-ca-bundle\") pod \"cde619b2-b551-4a41-b2f2-c38f1b507a82\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.232107 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-state-metrics-tls-config\") pod \"cde619b2-b551-4a41-b2f2-c38f1b507a82\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.232262 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-55g5z\" (UniqueName: \"kubernetes.io/projected/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-api-access-55g5z\") pod \"cde619b2-b551-4a41-b2f2-c38f1b507a82\" (UID: \"cde619b2-b551-4a41-b2f2-c38f1b507a82\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.232853 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-447m6\" (UniqueName: \"kubernetes.io/projected/6470de54-fdec-4648-b941-1031c67f55ca-kube-api-access-447m6\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.232966 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdhl2\" (UniqueName: \"kubernetes.io/projected/5e48c1a5-da0e-4539-a55c-7c0bbdae2486-kube-api-access-kdhl2\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.233051 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rth65\" (UniqueName: \"kubernetes.io/projected/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-kube-api-access-rth65\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.233120 5065 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.233191 5065 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-secrets\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.233257 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-576d7\" (UniqueName: \"kubernetes.io/projected/f580765e-50e7-42a1-a798-325b80e29e9d-kube-api-access-576d7\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.233319 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz4hq\" (UniqueName: \"kubernetes.io/projected/14ab13f6-4348-4848-9149-4d1ee240d1ed-kube-api-access-xz4hq\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.233426 5065 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-operator-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.233496 5065 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-config-data-default\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.233555 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f580765e-50e7-42a1-a798-325b80e29e9d-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.233629 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.233689 5065 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-config-data-generated\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.233777 5065 
reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.234431 5065 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-kolla-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.234529 5065 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.234604 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.234553 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-be58-account-create-cmqwf"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.241167 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-kube-api-access-cp5cw" (OuterVolumeSpecName: "kube-api-access-cp5cw") pod "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" (UID: "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad"). InnerVolumeSpecName "kube-api-access-cp5cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.252352 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.252399 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.252462 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6ce5c750-265a-4589-8f5e-a9e6a846d0d0","Type":"ContainerDied","Data":"543b997de8c7d7ba73f27fa4af86fe456d8210888fea508969854db9f409ef9a"} Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.252542 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b79866b6-r6s8q" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.252627 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.252825 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5d9966bcdf-t8xzk" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.253018 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.253438 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 08 13:40:26 crc kubenswrapper[5065]: E1008 13:40:26.255714 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5540a1f6a5acf06d57aab76547727aae20f3096f0b615cf88a2220bb824dec41" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 13:40:26 crc kubenswrapper[5065]: E1008 13:40:26.257501 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5540a1f6a5acf06d57aab76547727aae20f3096f0b615cf88a2220bb824dec41" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 13:40:26 crc kubenswrapper[5065]: E1008 13:40:26.258804 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5540a1f6a5acf06d57aab76547727aae20f3096f0b615cf88a2220bb824dec41" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 13:40:26 crc kubenswrapper[5065]: E1008 13:40:26.258828 5065 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="84d28af9-b1bc-4475-abc6-9c33380349e9" containerName="nova-cell0-conductor-conductor" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.262006 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-kube-api-access-fmqgk" (OuterVolumeSpecName: "kube-api-access-fmqgk") pod "6ce5c750-265a-4589-8f5e-a9e6a846d0d0" (UID: "6ce5c750-265a-4589-8f5e-a9e6a846d0d0"). InnerVolumeSpecName "kube-api-access-fmqgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.264627 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xnw9m"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.264787 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-api-access-55g5z" (OuterVolumeSpecName: "kube-api-access-55g5z") pod "cde619b2-b551-4a41-b2f2-c38f1b507a82" (UID: "cde619b2-b551-4a41-b2f2-c38f1b507a82"). InnerVolumeSpecName "kube-api-access-55g5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.282945 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xnw9m"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.298154 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.316457 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.336791 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxnbv\" (UniqueName: \"kubernetes.io/projected/8c5926be-c223-4cbc-b6e3-a16726aa6c84-kube-api-access-mxnbv\") pod \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.336843 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-config-data\") pod \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.341154 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c5926be-c223-4cbc-b6e3-a16726aa6c84-logs\") pod \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.341284 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-scripts\") pod \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.341562 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c5926be-c223-4cbc-b6e3-a16726aa6c84-logs" (OuterVolumeSpecName: "logs") pod "8c5926be-c223-4cbc-b6e3-a16726aa6c84" (UID: "8c5926be-c223-4cbc-b6e3-a16726aa6c84"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.341636 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj2w6\" (UniqueName: \"kubernetes.io/projected/38fe9b6a-9cdf-4585-a585-474172306dd9-kube-api-access-lj2w6\") pod \"38fe9b6a-9cdf-4585-a585-474172306dd9\" (UID: \"38fe9b6a-9cdf-4585-a585-474172306dd9\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.341691 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-internal-tls-certs\") pod \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.341837 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-combined-ca-bundle\") pod \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.341861 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-public-tls-certs\") pod \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\" (UID: \"8c5926be-c223-4cbc-b6e3-a16726aa6c84\") " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.347546 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c5926be-c223-4cbc-b6e3-a16726aa6c84-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.347584 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp5cw\" (UniqueName: \"kubernetes.io/projected/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-kube-api-access-cp5cw\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.347629 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmqgk\" (UniqueName: \"kubernetes.io/projected/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-kube-api-access-fmqgk\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.347642 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55g5z\" (UniqueName: \"kubernetes.io/projected/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-api-access-55g5z\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.351628 5065 scope.go:117] "RemoveContainer" containerID="bff50251b4d187b021a54254dfa7d771b946c97dfed06af02871e3a9618634ea" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.377989 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" (UID: "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.421598 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c5926be-c223-4cbc-b6e3-a16726aa6c84-kube-api-access-mxnbv" (OuterVolumeSpecName: "kube-api-access-mxnbv") pod "8c5926be-c223-4cbc-b6e3-a16726aa6c84" (UID: "8c5926be-c223-4cbc-b6e3-a16726aa6c84"). 
InnerVolumeSpecName "kube-api-access-mxnbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.422131 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fe9b6a-9cdf-4585-a585-474172306dd9-kube-api-access-lj2w6" (OuterVolumeSpecName: "kube-api-access-lj2w6") pod "38fe9b6a-9cdf-4585-a585-474172306dd9" (UID: "38fe9b6a-9cdf-4585-a585-474172306dd9"). InnerVolumeSpecName "kube-api-access-lj2w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.426532 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-scripts" (OuterVolumeSpecName: "scripts") pod "8c5926be-c223-4cbc-b6e3-a16726aa6c84" (UID: "8c5926be-c223-4cbc-b6e3-a16726aa6c84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.451770 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxnbv\" (UniqueName: \"kubernetes.io/projected/8c5926be-c223-4cbc-b6e3-a16726aa6c84-kube-api-access-mxnbv\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.451981 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.452018 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj2w6\" (UniqueName: \"kubernetes.io/projected/38fe9b6a-9cdf-4585-a585-474172306dd9-kube-api-access-lj2w6\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.452052 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.755727 5065 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.758091 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" (UID: "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.758131 5065 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.770862 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f580765e-50e7-42a1-a798-325b80e29e9d" (UID: "f580765e-50e7-42a1-a798-325b80e29e9d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.802530 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" (UID: "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.835623 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" (UID: "9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.849493 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" (UID: "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.851823 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-config-data" (OuterVolumeSpecName: "config-data") pod "6470de54-fdec-4648-b941-1031c67f55ca" (UID: "6470de54-fdec-4648-b941-1031c67f55ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.864522 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.864560 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.864571 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.864580 5065 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.864591 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.864601 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.888896 5065 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="022c8ac2-ac4f-4994-949d-2f14030e1bda" path="/var/lib/kubelet/pods/022c8ac2-ac4f-4994-949d-2f14030e1bda/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.889519 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e2de891-07e9-44cf-aa13-593ecc5f571a" path="/var/lib/kubelet/pods/0e2de891-07e9-44cf-aa13-593ecc5f571a/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.890031 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18710aa1-a99f-421b-9a4f-694362061773" path="/var/lib/kubelet/pods/18710aa1-a99f-421b-9a4f-694362061773/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.896330 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2" path="/var/lib/kubelet/pods/35c0afa0-44d8-4e3c-9ba7-e09d5b08dbc2/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.898649 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38fd97a6-e936-4503-a238-97b63e01a7de" path="/var/lib/kubelet/pods/38fd97a6-e936-4503-a238-97b63e01a7de/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.899304 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4749b7e4-3896-474d-84b3-8ddf351a24ac" path="/var/lib/kubelet/pods/4749b7e4-3896-474d-84b3-8ddf351a24ac/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.900302 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eba221c-653d-434a-a486-16be41c4a5c4" path="/var/lib/kubelet/pods/4eba221c-653d-434a-a486-16be41c4a5c4/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.901231 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" (UID: "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.902157 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57d1ad99-ea8d-4b51-bdf5-dfff27b0407d" path="/var/lib/kubelet/pods/57d1ad99-ea8d-4b51-bdf5-dfff27b0407d/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.902659 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a00958a-1aab-44b8-9e6b-13a09ca60d99" path="/var/lib/kubelet/pods/5a00958a-1aab-44b8-9e6b-13a09ca60d99/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.904360 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ab13f6-4348-4848-9149-4d1ee240d1ed-config-data" (OuterVolumeSpecName: "config-data") pod "14ab13f6-4348-4848-9149-4d1ee240d1ed" (UID: "14ab13f6-4348-4848-9149-4d1ee240d1ed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.905563 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e48c1a5-da0e-4539-a55c-7c0bbdae2486" path="/var/lib/kubelet/pods/5e48c1a5-da0e-4539-a55c-7c0bbdae2486/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.905926 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e89a553-dbd3-47d3-a187-c51aa149175c" path="/var/lib/kubelet/pods/5e89a553-dbd3-47d3-a187-c51aa149175c/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.906670 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "cde619b2-b551-4a41-b2f2-c38f1b507a82" (UID: "cde619b2-b551-4a41-b2f2-c38f1b507a82"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.906814 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1486a5d-5fb5-4055-b5e2-a9eceb919f29" path="/var/lib/kubelet/pods/a1486a5d-5fb5-4055-b5e2-a9eceb919f29/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.908034 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ce5c750-265a-4589-8f5e-a9e6a846d0d0" (UID: "6ce5c750-265a-4589-8f5e-a9e6a846d0d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.910664 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b215a42c-d422-4db9-a83e-df79f7bff9e6" path="/var/lib/kubelet/pods/b215a42c-d422-4db9-a83e-df79f7bff9e6/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.911351 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d90dc7-e101-4bde-b8b3-e3c13e788004" path="/var/lib/kubelet/pods/b2d90dc7-e101-4bde-b8b3-e3c13e788004/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.912448 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63585b8-a023-4567-85f8-6c232acb89c2" path="/var/lib/kubelet/pods/c63585b8-a023-4567-85f8-6c232acb89c2/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.914250 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caf670f8-9cf6-4200-8036-05e9798cad78" path="/var/lib/kubelet/pods/caf670f8-9cf6-4200-8036-05e9798cad78/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.914592 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ab13f6-4348-4848-9149-4d1ee240d1ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14ab13f6-4348-4848-9149-4d1ee240d1ed" (UID: "14ab13f6-4348-4848-9149-4d1ee240d1ed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.916376 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef5f3c36-30db-4174-90a1-ac7dd45f2207" path="/var/lib/kubelet/pods/ef5f3c36-30db-4174-90a1-ac7dd45f2207/volumes" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.930858 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-config-data" (OuterVolumeSpecName: "config-data") pod "8c5926be-c223-4cbc-b6e3-a16726aa6c84" (UID: "8c5926be-c223-4cbc-b6e3-a16726aa6c84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.932386 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cde619b2-b551-4a41-b2f2-c38f1b507a82" (UID: "cde619b2-b551-4a41-b2f2-c38f1b507a82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.944268 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6470de54-fdec-4648-b941-1031c67f55ca" (UID: "6470de54-fdec-4648-b941-1031c67f55ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.966062 5065 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.966101 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.966113 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.966124 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ab13f6-4348-4848-9149-4d1ee240d1ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.966134 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ab13f6-4348-4848-9149-4d1ee240d1ed-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.966144 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.966154 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 
Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.971072 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-config-data" (OuterVolumeSpecName: "config-data") pod "6ce5c750-265a-4589-8f5e-a9e6a846d0d0" (UID: "6ce5c750-265a-4589-8f5e-a9e6a846d0d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.990435 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-config-data" (OuterVolumeSpecName: "config-data") pod "f580765e-50e7-42a1-a798-325b80e29e9d" (UID: "f580765e-50e7-42a1-a798-325b80e29e9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:26 crc kubenswrapper[5065]: I1008 13:40:26.997157 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-config-data" (OuterVolumeSpecName: "config-data") pod "9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" (UID: "9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.009489 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8c5926be-c223-4cbc-b6e3-a16726aa6c84" (UID: "8c5926be-c223-4cbc-b6e3-a16726aa6c84"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.009934 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "cde619b2-b551-4a41-b2f2-c38f1b507a82" (UID: "cde619b2-b551-4a41-b2f2-c38f1b507a82"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.014935 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "6470de54-fdec-4648-b941-1031c67f55ca" (UID: "6470de54-fdec-4648-b941-1031c67f55ca"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.015542 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "6470de54-fdec-4648-b941-1031c67f55ca" (UID: "6470de54-fdec-4648-b941-1031c67f55ca"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.036917 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-config-data" (OuterVolumeSpecName: "config-data") pod "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" (UID: "c0d5e818-6480-4dfb-b8a2-50dc4ec58dad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.037950 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" (UID: "03eb50e9-c0b5-4f96-8dd0-27d776f8c71e"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.043996 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c5926be-c223-4cbc-b6e3-a16726aa6c84" (UID: "8c5926be-c223-4cbc-b6e3-a16726aa6c84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.058044 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8c5926be-c223-4cbc-b6e3-a16726aa6c84" (UID: "8c5926be-c223-4cbc-b6e3-a16726aa6c84"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.067807 5065 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.067845 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.067859 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.067869 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.067881 5065 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5926be-c223-4cbc-b6e3-a16726aa6c84-public-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.067892 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f580765e-50e7-42a1-a798-325b80e29e9d-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.067902 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce5c750-265a-4589-8f5e-a9e6a846d0d0-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.067916 5065 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.067929 5065 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e-galera-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.067940 5065 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cde619b2-b551-4a41-b2f2-c38f1b507a82-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.067953 5065 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470de54-fdec-4648-b941-1031c67f55ca-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.123335 5065 scope.go:117] "RemoveContainer" containerID="4f5cc840893a37a8358a60214a0e53d57ac801d6d1d7c94a9535b9435aca7b18"
Oct 08 13:40:27 crc kubenswrapper[5065]: E1008 13:40:27.123812 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f5cc840893a37a8358a60214a0e53d57ac801d6d1d7c94a9535b9435aca7b18\": container with ID starting with 4f5cc840893a37a8358a60214a0e53d57ac801d6d1d7c94a9535b9435aca7b18 not found: ID does not exist" containerID="4f5cc840893a37a8358a60214a0e53d57ac801d6d1d7c94a9535b9435aca7b18"
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.123842 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f5cc840893a37a8358a60214a0e53d57ac801d6d1d7c94a9535b9435aca7b18"} err="failed to get container status \"4f5cc840893a37a8358a60214a0e53d57ac801d6d1d7c94a9535b9435aca7b18\": rpc error: code = NotFound desc = could not find container \"4f5cc840893a37a8358a60214a0e53d57ac801d6d1d7c94a9535b9435aca7b18\": container with ID starting with 4f5cc840893a37a8358a60214a0e53d57ac801d6d1d7c94a9535b9435aca7b18 not found: ID does not exist"
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.123863 5065 scope.go:117] "RemoveContainer" containerID="bff50251b4d187b021a54254dfa7d771b946c97dfed06af02871e3a9618634ea"
Oct 08 13:40:27 crc kubenswrapper[5065]: E1008 13:40:27.124124 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff50251b4d187b021a54254dfa7d771b946c97dfed06af02871e3a9618634ea\": container with ID starting with bff50251b4d187b021a54254dfa7d771b946c97dfed06af02871e3a9618634ea not found: ID does not exist" containerID="bff50251b4d187b021a54254dfa7d771b946c97dfed06af02871e3a9618634ea"
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.124144 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff50251b4d187b021a54254dfa7d771b946c97dfed06af02871e3a9618634ea"} err="failed to get container status \"bff50251b4d187b021a54254dfa7d771b946c97dfed06af02871e3a9618634ea\": rpc error: code = NotFound desc = could not find container \"bff50251b4d187b021a54254dfa7d771b946c97dfed06af02871e3a9618634ea\": container with ID starting with bff50251b4d187b021a54254dfa7d771b946c97dfed06af02871e3a9618634ea not found: ID does not exist"
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.124156 5065 scope.go:117] "RemoveContainer" containerID="1f708c017b5683915f1b955e06bbebbafcc9a7518e52663fdd2d0620e2d9d89e"
Oct 08 13:40:27 crc kubenswrapper[5065]: E1008 13:40:27.124482 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd7a89294fe370f8f8e4fa9239f9e8afee5cb7f783b16a353fb49a9e06106fbe" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Oct 08 13:40:27 crc kubenswrapper[5065]: E1008 13:40:27.126252 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd7a89294fe370f8f8e4fa9239f9e8afee5cb7f783b16a353fb49a9e06106fbe" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Oct 08 13:40:27 crc kubenswrapper[5065]: E1008 13:40:27.127315 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd7a89294fe370f8f8e4fa9239f9e8afee5cb7f783b16a353fb49a9e06106fbe" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Oct 08 13:40:27 crc kubenswrapper[5065]: E1008 13:40:27.127346 5065 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="2c2f3965-f057-4b1d-bbc9-7235ac48ed49" containerName="ovn-northd"
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.268639 5065 scope.go:117] "RemoveContainer" containerID="1f708c017b5683915f1b955e06bbebbafcc9a7518e52663fdd2d0620e2d9d89e"
Oct 08 13:40:27 crc kubenswrapper[5065]: E1008 13:40:27.269582 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f708c017b5683915f1b955e06bbebbafcc9a7518e52663fdd2d0620e2d9d89e\": container with ID starting with 1f708c017b5683915f1b955e06bbebbafcc9a7518e52663fdd2d0620e2d9d89e not found: ID does not exist" containerID="1f708c017b5683915f1b955e06bbebbafcc9a7518e52663fdd2d0620e2d9d89e"
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.269615 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f708c017b5683915f1b955e06bbebbafcc9a7518e52663fdd2d0620e2d9d89e"} err="failed to get container status \"1f708c017b5683915f1b955e06bbebbafcc9a7518e52663fdd2d0620e2d9d89e\": rpc error: code = NotFound desc = could not find container \"1f708c017b5683915f1b955e06bbebbafcc9a7518e52663fdd2d0620e2d9d89e\": container with ID starting with 1f708c017b5683915f1b955e06bbebbafcc9a7518e52663fdd2d0620e2d9d89e not found: ID does not exist"
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.269636 5065 scope.go:117] "RemoveContainer" containerID="91afe7ac30e8893277300be9b194d6af0055b047630cb6d73444b3aff6643b7b"
Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.273918 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.296907 5065 generic.go:334] "Generic (PLEG): container finished" podID="84d28af9-b1bc-4475-abc6-9c33380349e9" containerID="5540a1f6a5acf06d57aab76547727aae20f3096f0b615cf88a2220bb824dec41" exitCode=0 Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.297013 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"84d28af9-b1bc-4475-abc6-9c33380349e9","Type":"ContainerDied","Data":"5540a1f6a5acf06d57aab76547727aae20f3096f0b615cf88a2220bb824dec41"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.297040 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"84d28af9-b1bc-4475-abc6-9c33380349e9","Type":"ContainerDied","Data":"ee52a3178beae32b84f9e8f28818f8f7db3d6bb3530336aff8001369a38af345"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.297052 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee52a3178beae32b84f9e8f28818f8f7db3d6bb3530336aff8001369a38af345" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.303942 5065 generic.go:334] "Generic (PLEG): container finished" podID="493c63a1-0210-4a70-a964-79522491fd05" containerID="4c694dd4a31cd2734ca3f755d48b6352d8867ca981e402024f204138ea07c425" exitCode=0 Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.303998 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"493c63a1-0210-4a70-a964-79522491fd05","Type":"ContainerDied","Data":"4c694dd4a31cd2734ca3f755d48b6352d8867ca981e402024f204138ea07c425"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.304023 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"493c63a1-0210-4a70-a964-79522491fd05","Type":"ContainerDied","Data":"1d36fcd053ebdd5c48504d2380f816594b9018d1995c86edd1baa22df7cb73f3"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.304037 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d36fcd053ebdd5c48504d2380f816594b9018d1995c86edd1baa22df7cb73f3" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.307021 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican5412-account-delete-fh88w" event={"ID":"36ed295f-7baa-466e-8a26-6d923a84d1b5","Type":"ContainerStarted","Data":"6c2a06fce67fdc46683435adda6fd6f000c42faa508d2e8e629b9612831f3a53"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.307143 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican5412-account-delete-fh88w" podUID="36ed295f-7baa-466e-8a26-6d923a84d1b5" containerName="mariadb-account-delete" containerID="cri-o://6c2a06fce67fdc46683435adda6fd6f000c42faa508d2e8e629b9612831f3a53" gracePeriod=30 Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.314106 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0be58-account-delete-r2zk9" event={"ID":"1582f178-44ee-4e28-a6d6-1d6a29050b56","Type":"ContainerStarted","Data":"ca50e54a3f74190dd8739f07cebb4c9785042034c2060fd50989b0b507f76f61"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.314250 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/novacell0be58-account-delete-r2zk9" podUID="1582f178-44ee-4e28-a6d6-1d6a29050b56" containerName="mariadb-account-delete" 
containerID="cri-o://ca50e54a3f74190dd8739f07cebb4c9785042034c2060fd50989b0b507f76f61" gracePeriod=30 Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.338379 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8cea80f5-d915-459c-9882-4ce114929ab4","Type":"ContainerDied","Data":"9695c1bae924698969321b11dcdf8c9c61d8845781c793deab0657229ef902be"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.338487 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.341474 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican5412-account-delete-fh88w" podStartSLOduration=7.34144934 podStartE2EDuration="7.34144934s" podCreationTimestamp="2025-10-08 13:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:40:27.337620994 +0000 UTC m=+1329.115002751" watchObservedRunningTime="2025-10-08 13:40:27.34144934 +0000 UTC m=+1329.118831097" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.341806 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronca6e-account-delete-g4lj7" event={"ID":"80b97e55-65fa-4e4a-becd-d13dd95bb78a","Type":"ContainerStarted","Data":"8d0d43538e67d7cd39237b73d22a8441c267ee322f53837ffdb4cfcd94000113"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.342067 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutronca6e-account-delete-g4lj7" podUID="80b97e55-65fa-4e4a-becd-d13dd95bb78a" containerName="mariadb-account-delete" containerID="cri-o://8d0d43538e67d7cd39237b73d22a8441c267ee322f53837ffdb4cfcd94000113" gracePeriod=30 Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.352291 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi6610-account-delete-vhpbl" event={"ID":"b6712e27-2a2f-43a8-8c79-dd7b5090d987","Type":"ContainerStarted","Data":"4dc7797503fbbdfdc4a93c8aea5b312f41d09c6da1bba330a66bdbab12d2a553"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.352553 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/novaapi6610-account-delete-vhpbl" podUID="b6712e27-2a2f-43a8-8c79-dd7b5090d987" containerName="mariadb-account-delete" containerID="cri-o://4dc7797503fbbdfdc4a93c8aea5b312f41d09c6da1bba330a66bdbab12d2a553" gracePeriod=30 Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.361238 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbca12dd-73a9-4533-b424-ebaf0c8cec0c","Type":"ContainerDied","Data":"047514c2a74a7f3e0fcc53a8746fdb4da8950d50ef46d467841fa27c35066882"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.361300 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="047514c2a74a7f3e0fcc53a8746fdb4da8950d50ef46d467841fa27c35066882" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.370029 5065 generic.go:334] "Generic (PLEG): container finished" podID="2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" containerID="0e1463d6dafc9375c3fe50606be6afdbff0ecc4ff69847c5fb6972ed597e6323" exitCode=0 Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.370108 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dd9f968c6-s658p" 
event={"ID":"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7","Type":"ContainerDied","Data":"0e1463d6dafc9375c3fe50606be6afdbff0ecc4ff69847c5fb6972ed597e6323"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.370134 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dd9f968c6-s658p" event={"ID":"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7","Type":"ContainerDied","Data":"6a440f6db7d4aefe96e20e5600720abbe971ccf10a98929d84641ba84ecc709d"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.370197 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a440f6db7d4aefe96e20e5600720abbe971ccf10a98929d84641ba84ecc709d" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.372345 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novacell0be58-account-delete-r2zk9" podStartSLOduration=6.372333529 podStartE2EDuration="6.372333529s" podCreationTimestamp="2025-10-08 13:40:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:40:27.348123916 +0000 UTC m=+1329.125505673" watchObservedRunningTime="2025-10-08 13:40:27.372333529 +0000 UTC m=+1329.149715286" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.376502 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutronca6e-account-delete-g4lj7" podStartSLOduration=7.376492524 podStartE2EDuration="7.376492524s" podCreationTimestamp="2025-10-08 13:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:40:27.367467083 +0000 UTC m=+1329.144848840" watchObservedRunningTime="2025-10-08 13:40:27.376492524 +0000 UTC m=+1329.153874281" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.404019 5065 generic.go:334] "Generic (PLEG): container finished" podID="a29eea83-9d60-4101-a351-6f8468a8116c" containerID="3dd840b2a1968cb45aa4333789027815be04db1faa5fe300d7fbe1813965b970" exitCode=0 Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.404125 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a29eea83-9d60-4101-a351-6f8468a8116c","Type":"ContainerDied","Data":"3dd840b2a1968cb45aa4333789027815be04db1faa5fe300d7fbe1813965b970"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.404156 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a29eea83-9d60-4101-a351-6f8468a8116c","Type":"ContainerDied","Data":"75ab889577e68466624a72eb529ae5fb83b20894770ccc10fea01b43eba853cc"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.404171 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75ab889577e68466624a72eb529ae5fb83b20894770ccc10fea01b43eba853cc" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.411683 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cea80f5-d915-459c-9882-4ce114929ab4-httpd-run\") pod \"8cea80f5-d915-459c-9882-4ce114929ab4\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.411780 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-scripts\") pod \"8cea80f5-d915-459c-9882-4ce114929ab4\" (UID: 
\"8cea80f5-d915-459c-9882-4ce114929ab4\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.411821 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cea80f5-d915-459c-9882-4ce114929ab4-logs\") pod \"8cea80f5-d915-459c-9882-4ce114929ab4\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.411853 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-combined-ca-bundle\") pod \"8cea80f5-d915-459c-9882-4ce114929ab4\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.411911 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-config-data\") pod \"8cea80f5-d915-459c-9882-4ce114929ab4\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.411931 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-public-tls-certs\") pod \"8cea80f5-d915-459c-9882-4ce114929ab4\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.411960 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"8cea80f5-d915-459c-9882-4ce114929ab4\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.412150 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa6e8e72-d895-4018-a176-978d7975d8a6","Type":"ContainerDied","Data":"484a5d6e6ba35b1c285960d8d99c16069f4b85f2614ea43f3930f6af39b3888e"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.412202 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="484a5d6e6ba35b1c285960d8d99c16069f4b85f2614ea43f3930f6af39b3888e" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.413593 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cea80f5-d915-459c-9882-4ce114929ab4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8cea80f5-d915-459c-9882-4ce114929ab4" (UID: "8cea80f5-d915-459c-9882-4ce114929ab4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.415384 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cea80f5-d915-459c-9882-4ce114929ab4-logs" (OuterVolumeSpecName: "logs") pod "8cea80f5-d915-459c-9882-4ce114929ab4" (UID: "8cea80f5-d915-459c-9882-4ce114929ab4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.419987 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "8cea80f5-d915-459c-9882-4ce114929ab4" (UID: "8cea80f5-d915-459c-9882-4ce114929ab4"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.423466 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance8439-account-delete-zlfcc" podUID="f60057cf-9f14-4fbd-b161-e27abcc9c7a5" containerName="mariadb-account-delete" containerID="cri-o://3c36edfa67f7001b81b7944bb3c24a3e22efa071127d19f3ba64444763136734" gracePeriod=30 Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.424011 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance8439-account-delete-zlfcc" event={"ID":"f60057cf-9f14-4fbd-b161-e27abcc9c7a5","Type":"ContainerStarted","Data":"3c36edfa67f7001b81b7944bb3c24a3e22efa071127d19f3ba64444763136734"} Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.430839 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-scripts" (OuterVolumeSpecName: "scripts") pod "8cea80f5-d915-459c-9882-4ce114929ab4" (UID: "8cea80f5-d915-459c-9882-4ce114929ab4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.460629 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novaapi6610-account-delete-vhpbl" podStartSLOduration=7.460602432 podStartE2EDuration="7.460602432s" podCreationTimestamp="2025-10-08 13:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:40:27.398785994 +0000 UTC m=+1329.176167751" watchObservedRunningTime="2025-10-08 13:40:27.460602432 +0000 UTC m=+1329.237984199" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.479030 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance8439-account-delete-zlfcc" podStartSLOduration=7.479012654 podStartE2EDuration="7.479012654s" podCreationTimestamp="2025-10-08 13:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 13:40:27.44290052 +0000 UTC m=+1329.220282277" watchObservedRunningTime="2025-10-08 13:40:27.479012654 +0000 UTC m=+1329.256394411" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.483084 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-config-data" (OuterVolumeSpecName: "config-data") pod "8cea80f5-d915-459c-9882-4ce114929ab4" (UID: "8cea80f5-d915-459c-9882-4ce114929ab4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.509211 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cea80f5-d915-459c-9882-4ce114929ab4" (UID: "8cea80f5-d915-459c-9882-4ce114929ab4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.516183 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bshtl\" (UniqueName: \"kubernetes.io/projected/8cea80f5-d915-459c-9882-4ce114929ab4-kube-api-access-bshtl\") pod \"8cea80f5-d915-459c-9882-4ce114929ab4\" (UID: \"8cea80f5-d915-459c-9882-4ce114929ab4\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.518426 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.518451 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cea80f5-d915-459c-9882-4ce114929ab4-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.518460 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.518469 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.518496 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.518508 5065 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cea80f5-d915-459c-9882-4ce114929ab4-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.525890 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea80f5-d915-459c-9882-4ce114929ab4-kube-api-access-bshtl" (OuterVolumeSpecName: "kube-api-access-bshtl") pod "8cea80f5-d915-459c-9882-4ce114929ab4" (UID: "8cea80f5-d915-459c-9882-4ce114929ab4"). InnerVolumeSpecName "kube-api-access-bshtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.547912 5065 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.551087 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8cea80f5-d915-459c-9882-4ce114929ab4" (UID: "8cea80f5-d915-459c-9882-4ce114929ab4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.620334 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.620382 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bshtl\" (UniqueName: \"kubernetes.io/projected/8cea80f5-d915-459c-9882-4ce114929ab4-kube-api-access-bshtl\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.621194 5065 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cea80f5-d915-459c-9882-4ce114929ab4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.621209 5065 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.639396 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.652492 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementfb78-account-delete-xfxsd"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.658435 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.662732 5065 scope.go:117] "RemoveContainer" containerID="ed14fb4d41bcd1662b46fbaba5448c7edb1bddf5c128eb75189e64b2a3222c21" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.671532 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placementfb78-account-delete-xfxsd"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.676956 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.691015 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.711536 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.731309 5065 scope.go:117] "RemoveContainer" containerID="f0213aea8bcbd4774261ea9ed66c3a96af420735c302c67b2721cd71e709425b" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.782820 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.802890 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828386 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/493c63a1-0210-4a70-a964-79522491fd05-logs\") pod \"493c63a1-0210-4a70-a964-79522491fd05\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828467 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhbd2\" (UniqueName: \"kubernetes.io/projected/493c63a1-0210-4a70-a964-79522491fd05-kube-api-access-xhbd2\") pod \"493c63a1-0210-4a70-a964-79522491fd05\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828504 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-combined-ca-bundle\") pod \"fa6e8e72-d895-4018-a176-978d7975d8a6\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828533 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-scripts\") pod \"fa6e8e72-d895-4018-a176-978d7975d8a6\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828567 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a29eea83-9d60-4101-a351-6f8468a8116c-config-data\") pod \"a29eea83-9d60-4101-a351-6f8468a8116c\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828599 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-public-tls-certs\") pod \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828622 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-logs\") pod \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828659 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29eea83-9d60-4101-a351-6f8468a8116c-combined-ca-bundle\") pod \"a29eea83-9d60-4101-a351-6f8468a8116c\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828696 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-combined-ca-bundle\") pod \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828761 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnf6r\" (UniqueName: \"kubernetes.io/projected/fa6e8e72-d895-4018-a176-978d7975d8a6-kube-api-access-xnf6r\") pod \"fa6e8e72-d895-4018-a176-978d7975d8a6\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828795 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa6e8e72-d895-4018-a176-978d7975d8a6-httpd-run\") pod \"fa6e8e72-d895-4018-a176-978d7975d8a6\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828837 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-internal-tls-certs\") pod \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828863 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-config-data\") pod \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828896 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a29eea83-9d60-4101-a351-6f8468a8116c-kolla-config\") pod \"a29eea83-9d60-4101-a351-6f8468a8116c\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828923 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqflt\" (UniqueName: \"kubernetes.io/projected/84d28af9-b1bc-4475-abc6-9c33380349e9-kube-api-access-qqflt\") pod \"84d28af9-b1bc-4475-abc6-9c33380349e9\" (UID: \"84d28af9-b1bc-4475-abc6-9c33380349e9\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828955 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"fa6e8e72-d895-4018-a176-978d7975d8a6\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.828976 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-logs\") pod \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.829021 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d28af9-b1bc-4475-abc6-9c33380349e9-config-data\") pod \"84d28af9-b1bc-4475-abc6-9c33380349e9\" (UID: \"84d28af9-b1bc-4475-abc6-9c33380349e9\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.829064 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-internal-tls-certs\") pod \"fa6e8e72-d895-4018-a176-978d7975d8a6\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.829100 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-combined-ca-bundle\") pod \"493c63a1-0210-4a70-a964-79522491fd05\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.829136 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa6e8e72-d895-4018-a176-978d7975d8a6-logs\") pod \"fa6e8e72-d895-4018-a176-978d7975d8a6\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.829167 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-nova-metadata-tls-certs\") pod \"493c63a1-0210-4a70-a964-79522491fd05\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.829210 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-combined-ca-bundle\") pod \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.829234 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-public-tls-certs\") pod \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.829277 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d28af9-b1bc-4475-abc6-9c33380349e9-combined-ca-bundle\") pod \"84d28af9-b1bc-4475-abc6-9c33380349e9\" (UID: \"84d28af9-b1bc-4475-abc6-9c33380349e9\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.829308 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljh6k\" (UniqueName: \"kubernetes.io/projected/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-kube-api-access-ljh6k\") pod \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.829333 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-config-data-custom\") pod \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.829361 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdjn9\" (UniqueName: \"kubernetes.io/projected/a29eea83-9d60-4101-a351-6f8468a8116c-kube-api-access-hdjn9\") pod \"a29eea83-9d60-4101-a351-6f8468a8116c\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.829392 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-config-data\") pod \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\" (UID: \"2a6ab417-1dfb-4427-a34e-fd8cf995b4c7\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.833747 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-config-data\") pod \"493c63a1-0210-4a70-a964-79522491fd05\" (UID: \"493c63a1-0210-4a70-a964-79522491fd05\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.833792 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfwl9\" (UniqueName: \"kubernetes.io/projected/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-kube-api-access-dfwl9\") pod \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.833828 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-internal-tls-certs\") pod \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\" (UID: \"bbca12dd-73a9-4533-b424-ebaf0c8cec0c\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.833862 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-config-data\") pod \"fa6e8e72-d895-4018-a176-978d7975d8a6\" (UID: \"fa6e8e72-d895-4018-a176-978d7975d8a6\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.833886 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a29eea83-9d60-4101-a351-6f8468a8116c-memcached-tls-certs\") pod \"a29eea83-9d60-4101-a351-6f8468a8116c\" (UID: \"a29eea83-9d60-4101-a351-6f8468a8116c\") " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.835838 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a29eea83-9d60-4101-a351-6f8468a8116c-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "a29eea83-9d60-4101-a351-6f8468a8116c" (UID: "a29eea83-9d60-4101-a351-6f8468a8116c"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.835981 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-logs" (OuterVolumeSpecName: "logs") pod "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" (UID: "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.836657 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/493c63a1-0210-4a70-a964-79522491fd05-logs" (OuterVolumeSpecName: "logs") pod "493c63a1-0210-4a70-a964-79522491fd05" (UID: "493c63a1-0210-4a70-a964-79522491fd05"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.839181 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-kube-api-access-ljh6k" (OuterVolumeSpecName: "kube-api-access-ljh6k") pod "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" (UID: "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7"). InnerVolumeSpecName "kube-api-access-ljh6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.840573 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/493c63a1-0210-4a70-a964-79522491fd05-kube-api-access-xhbd2" (OuterVolumeSpecName: "kube-api-access-xhbd2") pod "493c63a1-0210-4a70-a964-79522491fd05" (UID: "493c63a1-0210-4a70-a964-79522491fd05"). InnerVolumeSpecName "kube-api-access-xhbd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.854996 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-699db6b76b-fd9ls"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.855067 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-699db6b76b-fd9ls"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.858799 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa6e8e72-d895-4018-a176-978d7975d8a6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fa6e8e72-d895-4018-a176-978d7975d8a6" (UID: "fa6e8e72-d895-4018-a176-978d7975d8a6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.859084 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a29eea83-9d60-4101-a351-6f8468a8116c-config-data" (OuterVolumeSpecName: "config-data") pod "a29eea83-9d60-4101-a351-6f8468a8116c" (UID: "a29eea83-9d60-4101-a351-6f8468a8116c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.870170 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "fa6e8e72-d895-4018-a176-978d7975d8a6" (UID: "fa6e8e72-d895-4018-a176-978d7975d8a6"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.871116 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" (UID: "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.872023 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-logs" (OuterVolumeSpecName: "logs") pod "bbca12dd-73a9-4533-b424-ebaf0c8cec0c" (UID: "bbca12dd-73a9-4533-b424-ebaf0c8cec0c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.873249 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa6e8e72-d895-4018-a176-978d7975d8a6-logs" (OuterVolumeSpecName: "logs") pod "fa6e8e72-d895-4018-a176-978d7975d8a6" (UID: "fa6e8e72-d895-4018-a176-978d7975d8a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.876350 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5d9966bcdf-t8xzk"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.897883 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5d9966bcdf-t8xzk"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.899650 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-kube-api-access-dfwl9" (OuterVolumeSpecName: "kube-api-access-dfwl9") pod "bbca12dd-73a9-4533-b424-ebaf0c8cec0c" (UID: "bbca12dd-73a9-4533-b424-ebaf0c8cec0c"). InnerVolumeSpecName "kube-api-access-dfwl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.899723 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a29eea83-9d60-4101-a351-6f8468a8116c-kube-api-access-hdjn9" (OuterVolumeSpecName: "kube-api-access-hdjn9") pod "a29eea83-9d60-4101-a351-6f8468a8116c" (UID: "a29eea83-9d60-4101-a351-6f8468a8116c"). InnerVolumeSpecName "kube-api-access-hdjn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.899765 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d28af9-b1bc-4475-abc6-9c33380349e9-kube-api-access-qqflt" (OuterVolumeSpecName: "kube-api-access-qqflt") pod "84d28af9-b1bc-4475-abc6-9c33380349e9" (UID: "84d28af9-b1bc-4475-abc6-9c33380349e9"). InnerVolumeSpecName "kube-api-access-qqflt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.918885 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa6e8e72-d895-4018-a176-978d7975d8a6-kube-api-access-xnf6r" (OuterVolumeSpecName: "kube-api-access-xnf6r") pod "fa6e8e72-d895-4018-a176-978d7975d8a6" (UID: "fa6e8e72-d895-4018-a176-978d7975d8a6"). InnerVolumeSpecName "kube-api-access-xnf6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.928466 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.945546 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/493c63a1-0210-4a70-a964-79522491fd05-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.945817 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhbd2\" (UniqueName: \"kubernetes.io/projected/493c63a1-0210-4a70-a964-79522491fd05-kube-api-access-xhbd2\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.945860 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a29eea83-9d60-4101-a351-6f8468a8116c-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.945875 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.945886 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnf6r\" (UniqueName: \"kubernetes.io/projected/fa6e8e72-d895-4018-a176-978d7975d8a6-kube-api-access-xnf6r\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.945897 5065 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa6e8e72-d895-4018-a176-978d7975d8a6-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.945908 5065 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a29eea83-9d60-4101-a351-6f8468a8116c-kolla-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.945918 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqflt\" (UniqueName: \"kubernetes.io/projected/84d28af9-b1bc-4475-abc6-9c33380349e9-kube-api-access-qqflt\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.945958 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.945970 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.945979 5065 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa6e8e72-d895-4018-a176-978d7975d8a6-logs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.945989 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljh6k\" (UniqueName: \"kubernetes.io/projected/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-kube-api-access-ljh6k\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.946000 5065 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.946010 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdjn9\" (UniqueName: \"kubernetes.io/projected/a29eea83-9d60-4101-a351-6f8468a8116c-kube-api-access-hdjn9\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.946020 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfwl9\" (UniqueName: \"kubernetes.io/projected/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-kube-api-access-dfwl9\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.946044 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.946066 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-scripts" (OuterVolumeSpecName: "scripts") pod "fa6e8e72-d895-4018-a176-978d7975d8a6" (UID: "fa6e8e72-d895-4018-a176-978d7975d8a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.959674 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d28af9-b1bc-4475-abc6-9c33380349e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84d28af9-b1bc-4475-abc6-9c33380349e9" (UID: "84d28af9-b1bc-4475-abc6-9c33380349e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.960088 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.964835 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.970471 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.980494 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.986003 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:40:27 crc kubenswrapper[5065]: I1008 13:40:27.998794 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.008203 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.011566 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-config-data" (OuterVolumeSpecName: "config-data") pod "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" (UID: "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.031840 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbca12dd-73a9-4533-b424-ebaf0c8cec0c" (UID: "bbca12dd-73a9-4533-b424-ebaf0c8cec0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.035450 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bbca12dd-73a9-4533-b424-ebaf0c8cec0c" (UID: "bbca12dd-73a9-4533-b424-ebaf0c8cec0c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.040473 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a29eea83-9d60-4101-a351-6f8468a8116c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a29eea83-9d60-4101-a351-6f8468a8116c" (UID: "a29eea83-9d60-4101-a351-6f8468a8116c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.045024 5065 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.047924 5065 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.047956 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d28af9-b1bc-4475-abc6-9c33380349e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.047988 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.048032 5065 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.048043 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.048054 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a29eea83-9d60-4101-a351-6f8468a8116c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.048064 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: E1008 13:40:28.048266 5065 
configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Oct 08 13:40:28 crc kubenswrapper[5065]: E1008 13:40:28.048382 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data podName:a416f725-cd7c-4bd8-9123-28cad18157d9 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:36.048364359 +0000 UTC m=+1337.825746116 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data") pod "rabbitmq-cell1-server-0" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9") : configmap "rabbitmq-cell1-config-data" not found
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.055161 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.064207 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa6e8e72-d895-4018-a176-978d7975d8a6" (UID: "fa6e8e72-d895-4018-a176-978d7975d8a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.065719 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.066508 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-config-data" (OuterVolumeSpecName: "config-data") pod "493c63a1-0210-4a70-a964-79522491fd05" (UID: "493c63a1-0210-4a70-a964-79522491fd05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.071867 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.072138 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d28af9-b1bc-4475-abc6-9c33380349e9-config-data" (OuterVolumeSpecName: "config-data") pod "84d28af9-b1bc-4475-abc6-9c33380349e9" (UID: "84d28af9-b1bc-4475-abc6-9c33380349e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.077638 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7b79866b6-r6s8q"]
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.082105 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-7b79866b6-r6s8q"]
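The two E1008 entries above show the kubelet's per-volume retry backoff: the configMap behind rabbitmq-cell1-server-0's "config-data" volume has already been deleted, so MountVolume.SetUp fails and nestedpendingoperations pushes the next attempt out by 8s (failure at 13:40:28.048, no retries until 13:40:36.048). That is consistent with a doubling 500ms, 1s, 2s, 4s, 8s progression, though only the 8s step is actually in the log. A minimal sketch of such a backoff, assuming the 500ms initial delay and roughly 2m cap that kubelet-style exponential backoff conventionally uses (the constants are an assumption, not quoted from this source):

package main

import (
	"fmt"
	"time"
)

// Assumed constants mirroring kubelet-style exponential backoff for failed
// volume operations; the log above only proves the 8s step.
const (
	initialDurationBeforeRetry = 500 * time.Millisecond
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
)

// durationBeforeRetry doubles the wait after every consecutive failure,
// clamped at the maximum.
func durationBeforeRetry(prev time.Duration) time.Duration {
	if prev < initialDurationBeforeRetry {
		return initialDurationBeforeRetry
	}
	next := 2 * prev
	if next > maxDurationBeforeRetry {
		next = maxDurationBeforeRetry
	}
	return next
}

func main() {
	d := time.Duration(0)
	for i := 1; i <= 6; i++ {
		d = durationBeforeRetry(d)
		fmt.Printf("attempt %d: retry in %v\n", i, d) // 500ms, 1s, 2s, 4s, 8s, 16s
	}
}

Until the configmap reappears, every attempt fails and the wait keeps doubling toward the cap, so the rabbitmq-cell1-server-0 sandbox cannot come up.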
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.115153 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bbca12dd-73a9-4533-b424-ebaf0c8cec0c" (UID: "bbca12dd-73a9-4533-b424-ebaf0c8cec0c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.116722 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" (UID: "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.121821 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-config-data" (OuterVolumeSpecName: "config-data") pod "bbca12dd-73a9-4533-b424-ebaf0c8cec0c" (UID: "bbca12dd-73a9-4533-b424-ebaf0c8cec0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.129048 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" (UID: "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.130381 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-config-data" (OuterVolumeSpecName: "config-data") pod "fa6e8e72-d895-4018-a176-978d7975d8a6" (UID: "fa6e8e72-d895-4018-a176-978d7975d8a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.149752 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.149786 5065 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.149799 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbca12dd-73a9-4533-b424-ebaf0c8cec0c-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.149809 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d28af9-b1bc-4475-abc6-9c33380349e9-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.149821 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.149832 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.149843 5065 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.149854 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.149864 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.150587 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "493c63a1-0210-4a70-a964-79522491fd05" (UID: "493c63a1-0210-4a70-a964-79522491fd05"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.150850 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" (UID: "2a6ab417-1dfb-4427-a34e-fd8cf995b4c7"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.158143 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a29eea83-9d60-4101-a351-6f8468a8116c-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "a29eea83-9d60-4101-a351-6f8468a8116c" (UID: "a29eea83-9d60-4101-a351-6f8468a8116c"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.158549 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fa6e8e72-d895-4018-a176-978d7975d8a6" (UID: "fa6e8e72-d895-4018-a176-978d7975d8a6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.252721 5065 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6e8e72-d895-4018-a176-978d7975d8a6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.253070 5065 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/493c63a1-0210-4a70-a964-79522491fd05-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.253085 5065 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a29eea83-9d60-4101-a351-6f8468a8116c-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.253095 5065 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.443533 5065 generic.go:334] "Generic (PLEG): container finished" podID="050c0e99-7984-43be-8701-84602f0c9294" containerID="4879ade2ad03c5af7ff4d4d4202d6af725543287d1ec07f3078406f5bb64df6e" exitCode=0 Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.443591 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"050c0e99-7984-43be-8701-84602f0c9294","Type":"ContainerDied","Data":"4879ade2ad03c5af7ff4d4d4202d6af725543287d1ec07f3078406f5bb64df6e"} Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.448166 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2c2f3965-f057-4b1d-bbc9-7235ac48ed49/ovn-northd/0.log" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.448198 5065 generic.go:334] "Generic (PLEG): container finished" podID="2c2f3965-f057-4b1d-bbc9-7235ac48ed49" containerID="cd7a89294fe370f8f8e4fa9239f9e8afee5cb7f783b16a353fb49a9e06106fbe" exitCode=139 Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.448238 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c2f3965-f057-4b1d-bbc9-7235ac48ed49","Type":"ContainerDied","Data":"cd7a89294fe370f8f8e4fa9239f9e8afee5cb7f783b16a353fb49a9e06106fbe"} Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.458765 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.458889 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.460348 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.460433 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.460476 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5dd9f968c6-s658p" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.460519 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 13:40:28 crc kubenswrapper[5065]: E1008 13:40:28.557868 5065 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 08 13:40:28 crc kubenswrapper[5065]: E1008 13:40:28.557960 5065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data podName:ae3d89be-0a42-4a3d-914c-3bff67bd37b4 nodeName:}" failed. No retries permitted until 2025-10-08 13:40:36.557929412 +0000 UTC m=+1338.335311169 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data") pod "rabbitmq-server-0" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4") : configmap "rabbitmq-config-data" not found Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.572651 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2c2f3965-f057-4b1d-bbc9-7235ac48ed49/ovn-northd/0.log" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.572711 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.658980 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8xtv\" (UniqueName: \"kubernetes.io/projected/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-kube-api-access-n8xtv\") pod \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.659034 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-combined-ca-bundle\") pod \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.659114 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-ovn-rundir\") pod \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.659133 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-ovn-northd-tls-certs\") pod \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.659213 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-config\") pod \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.659235 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-metrics-certs-tls-certs\") pod \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.659283 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-scripts\") pod \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\" (UID: \"2c2f3965-f057-4b1d-bbc9-7235ac48ed49\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.660183 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-scripts" (OuterVolumeSpecName: "scripts") pod "2c2f3965-f057-4b1d-bbc9-7235ac48ed49" (UID: "2c2f3965-f057-4b1d-bbc9-7235ac48ed49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.661564 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-config" (OuterVolumeSpecName: "config") pod "2c2f3965-f057-4b1d-bbc9-7235ac48ed49" (UID: "2c2f3965-f057-4b1d-bbc9-7235ac48ed49"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.661911 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "2c2f3965-f057-4b1d-bbc9-7235ac48ed49" (UID: "2c2f3965-f057-4b1d-bbc9-7235ac48ed49"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.664631 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-kube-api-access-n8xtv" (OuterVolumeSpecName: "kube-api-access-n8xtv") pod "2c2f3965-f057-4b1d-bbc9-7235ac48ed49" (UID: "2c2f3965-f057-4b1d-bbc9-7235ac48ed49"). InnerVolumeSpecName "kube-api-access-n8xtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.685533 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c2f3965-f057-4b1d-bbc9-7235ac48ed49" (UID: "2c2f3965-f057-4b1d-bbc9-7235ac48ed49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.725006 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "2c2f3965-f057-4b1d-bbc9-7235ac48ed49" (UID: "2c2f3965-f057-4b1d-bbc9-7235ac48ed49"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.736984 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "2c2f3965-f057-4b1d-bbc9-7235ac48ed49" (UID: "2c2f3965-f057-4b1d-bbc9-7235ac48ed49"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.760849 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.760880 5065 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.760890 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.760899 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8xtv\" (UniqueName: \"kubernetes.io/projected/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-kube-api-access-n8xtv\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.760908 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.760916 5065 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-ovn-rundir\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.760924 5065 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c2f3965-f057-4b1d-bbc9-7235ac48ed49-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.848352 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.865264 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/050c0e99-7984-43be-8701-84602f0c9294-config-data-generated\") pod \"050c0e99-7984-43be-8701-84602f0c9294\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.865871 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050c0e99-7984-43be-8701-84602f0c9294-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "050c0e99-7984-43be-8701-84602f0c9294" (UID: "050c0e99-7984-43be-8701-84602f0c9294"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.865950 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-combined-ca-bundle\") pod \"050c0e99-7984-43be-8701-84602f0c9294\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.866028 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-secrets\") pod \"050c0e99-7984-43be-8701-84602f0c9294\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.867123 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-kolla-config\") pod \"050c0e99-7984-43be-8701-84602f0c9294\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.867215 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-galera-tls-certs\") pod \"050c0e99-7984-43be-8701-84602f0c9294\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.867891 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "050c0e99-7984-43be-8701-84602f0c9294" (UID: "050c0e99-7984-43be-8701-84602f0c9294"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.868198 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zwvr\" (UniqueName: \"kubernetes.io/projected/050c0e99-7984-43be-8701-84602f0c9294-kube-api-access-5zwvr\") pod \"050c0e99-7984-43be-8701-84602f0c9294\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.868287 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-config-data-default\") pod \"050c0e99-7984-43be-8701-84602f0c9294\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.868348 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-operator-scripts\") pod \"050c0e99-7984-43be-8701-84602f0c9294\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.868430 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"050c0e99-7984-43be-8701-84602f0c9294\" (UID: \"050c0e99-7984-43be-8701-84602f0c9294\") " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.870669 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "050c0e99-7984-43be-8701-84602f0c9294" (UID: "050c0e99-7984-43be-8701-84602f0c9294"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.871926 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050c0e99-7984-43be-8701-84602f0c9294-kube-api-access-5zwvr" (OuterVolumeSpecName: "kube-api-access-5zwvr") pod "050c0e99-7984-43be-8701-84602f0c9294" (UID: "050c0e99-7984-43be-8701-84602f0c9294"). InnerVolumeSpecName "kube-api-access-5zwvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.876173 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "050c0e99-7984-43be-8701-84602f0c9294" (UID: "050c0e99-7984-43be-8701-84602f0c9294"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.880186 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-secrets" (OuterVolumeSpecName: "secrets") pod "050c0e99-7984-43be-8701-84602f0c9294" (UID: "050c0e99-7984-43be-8701-84602f0c9294"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: E1008 13:40:28.881098 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:28 crc kubenswrapper[5065]: E1008 13:40:28.885517 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:28 crc kubenswrapper[5065]: E1008 13:40:28.886077 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.886180 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zwvr\" (UniqueName: \"kubernetes.io/projected/050c0e99-7984-43be-8701-84602f0c9294-kube-api-access-5zwvr\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.886361 5065 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-config-data-default\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.886373 5065 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-operator-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.886382 5065 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/050c0e99-7984-43be-8701-84602f0c9294-config-data-generated\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.886394 5065 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-secrets\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.886405 5065 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/050c0e99-7984-43be-8701-84602f0c9294-kolla-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: E1008 13:40:28.889148 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:28 crc kubenswrapper[5065]: 
Oct 08 13:40:28 crc kubenswrapper[5065]: E1008 13:40:28.889193 5065 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovsdb-server"
Oct 08 13:40:28 crc kubenswrapper[5065]: E1008 13:40:28.894399 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.894691 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.202:3000/\": dial tcp 10.217.0.202:3000: connect: connection refused"
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.897466 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" path="/var/lib/kubelet/pods/03eb50e9-c0b5-4f96-8dd0-27d776f8c71e/volumes"
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.898523 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14ab13f6-4348-4848-9149-4d1ee240d1ed" path="/var/lib/kubelet/pods/14ab13f6-4348-4848-9149-4d1ee240d1ed/volumes"
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.899084 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38fe9b6a-9cdf-4585-a585-474172306dd9" path="/var/lib/kubelet/pods/38fe9b6a-9cdf-4585-a585-474172306dd9/volumes"
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.900130 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6470de54-fdec-4648-b941-1031c67f55ca" path="/var/lib/kubelet/pods/6470de54-fdec-4648-b941-1031c67f55ca/volumes"
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.900626 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ce5c750-265a-4589-8f5e-a9e6a846d0d0" path="/var/lib/kubelet/pods/6ce5c750-265a-4589-8f5e-a9e6a846d0d0/volumes"
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.901103 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c5926be-c223-4cbc-b6e3-a16726aa6c84" path="/var/lib/kubelet/pods/8c5926be-c223-4cbc-b6e3-a16726aa6c84/volumes"
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.902827 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea80f5-d915-459c-9882-4ce114929ab4" path="/var/lib/kubelet/pods/8cea80f5-d915-459c-9882-4ce114929ab4/volumes"
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.904513 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" path="/var/lib/kubelet/pods/9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac/volumes"
Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.905057 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" path="/var/lib/kubelet/pods/c0d5e818-6480-4dfb-b8a2-50dc4ec58dad/volumes"
Oct 08 13:40:28 crc kubenswrapper[5065]: E1008 13:40:28.905648 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code =
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:28 crc kubenswrapper[5065]: E1008 13:40:28.905719 5065 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovs-vswitchd" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.906754 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "050c0e99-7984-43be-8701-84602f0c9294" (UID: "050c0e99-7984-43be-8701-84602f0c9294"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.909195 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cde619b2-b551-4a41-b2f2-c38f1b507a82" path="/var/lib/kubelet/pods/cde619b2-b551-4a41-b2f2-c38f1b507a82/volumes" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.909713 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f580765e-50e7-42a1-a798-325b80e29e9d" path="/var/lib/kubelet/pods/f580765e-50e7-42a1-a798-325b80e29e9d/volumes" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.920259 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.922682 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "050c0e99-7984-43be-8701-84602f0c9294" (UID: "050c0e99-7984-43be-8701-84602f0c9294"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.923669 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.938157 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "050c0e99-7984-43be-8701-84602f0c9294" (UID: "050c0e99-7984-43be-8701-84602f0c9294"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.941116 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5dd9f968c6-s658p"] Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.951598 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5dd9f968c6-s658p"] Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.962648 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.968336 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.974170 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.980235 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.988110 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.988144 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.988154 5065 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/050c0e99-7984-43be-8701-84602f0c9294-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.992850 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:40:28 crc kubenswrapper[5065]: I1008 13:40:28.998783 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.005819 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.006481 5065 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.011070 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.014404 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-77965b6945-w5rpz" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.089258 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-scripts\") pod \"0643aa92-2649-4c41-b16e-9a05aac93f35\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.089367 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-config-data\") pod \"0643aa92-2649-4c41-b16e-9a05aac93f35\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.089470 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-public-tls-certs\") pod \"0643aa92-2649-4c41-b16e-9a05aac93f35\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.089586 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-fernet-keys\") pod \"0643aa92-2649-4c41-b16e-9a05aac93f35\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.089615 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-credential-keys\") pod \"0643aa92-2649-4c41-b16e-9a05aac93f35\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.089647 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv274\" (UniqueName: \"kubernetes.io/projected/0643aa92-2649-4c41-b16e-9a05aac93f35-kube-api-access-bv274\") pod \"0643aa92-2649-4c41-b16e-9a05aac93f35\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.089677 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-internal-tls-certs\") pod \"0643aa92-2649-4c41-b16e-9a05aac93f35\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.089726 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-combined-ca-bundle\") pod \"0643aa92-2649-4c41-b16e-9a05aac93f35\" (UID: \"0643aa92-2649-4c41-b16e-9a05aac93f35\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.090754 5065 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.104453 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0643aa92-2649-4c41-b16e-9a05aac93f35-kube-api-access-bv274" (OuterVolumeSpecName: "kube-api-access-bv274") pod "0643aa92-2649-4c41-b16e-9a05aac93f35" (UID: "0643aa92-2649-4c41-b16e-9a05aac93f35"). InnerVolumeSpecName "kube-api-access-bv274". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.104647 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0643aa92-2649-4c41-b16e-9a05aac93f35" (UID: "0643aa92-2649-4c41-b16e-9a05aac93f35"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.105106 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-scripts" (OuterVolumeSpecName: "scripts") pod "0643aa92-2649-4c41-b16e-9a05aac93f35" (UID: "0643aa92-2649-4c41-b16e-9a05aac93f35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.123452 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0643aa92-2649-4c41-b16e-9a05aac93f35" (UID: "0643aa92-2649-4c41-b16e-9a05aac93f35"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.133633 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0643aa92-2649-4c41-b16e-9a05aac93f35" (UID: "0643aa92-2649-4c41-b16e-9a05aac93f35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.135167 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0643aa92-2649-4c41-b16e-9a05aac93f35" (UID: "0643aa92-2649-4c41-b16e-9a05aac93f35"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.141986 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-config-data" (OuterVolumeSpecName: "config-data") pod "0643aa92-2649-4c41-b16e-9a05aac93f35" (UID: "0643aa92-2649-4c41-b16e-9a05aac93f35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.142027 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0643aa92-2649-4c41-b16e-9a05aac93f35" (UID: "0643aa92-2649-4c41-b16e-9a05aac93f35"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.192044 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.192088 5065 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.192102 5065 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.192114 5065 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.192128 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv274\" (UniqueName: \"kubernetes.io/projected/0643aa92-2649-4c41-b16e-9a05aac93f35-kube-api-access-bv274\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.192136 5065 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.192145 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.192155 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0643aa92-2649-4c41-b16e-9a05aac93f35-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.209027 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 13:40:29 crc kubenswrapper[5065]: E1008 13:40:29.276567 5065 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae3d89be_0a42_4a3d_914c_3bff67bd37b4.slice/crio-conmon-c184ff5407110302a6125a5a613f8a91d5febe7a6d10d230bf471f3d0f46b2f4.scope\": RecentStats: unable to find data in memory cache]" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.292968 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a416f725-cd7c-4bd8-9123-28cad18157d9-erlang-cookie-secret\") pod \"a416f725-cd7c-4bd8-9123-28cad18157d9\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.293038 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvspz\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-kube-api-access-dvspz\") pod \"a416f725-cd7c-4bd8-9123-28cad18157d9\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.293072 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data\") pod \"a416f725-cd7c-4bd8-9123-28cad18157d9\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.293094 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-plugins\") pod \"a416f725-cd7c-4bd8-9123-28cad18157d9\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.293109 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"a416f725-cd7c-4bd8-9123-28cad18157d9\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.293125 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-confd\") pod \"a416f725-cd7c-4bd8-9123-28cad18157d9\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.293200 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a416f725-cd7c-4bd8-9123-28cad18157d9-pod-info\") pod \"a416f725-cd7c-4bd8-9123-28cad18157d9\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.293238 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-erlang-cookie\") pod \"a416f725-cd7c-4bd8-9123-28cad18157d9\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.293256 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-server-conf\") pod \"a416f725-cd7c-4bd8-9123-28cad18157d9\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.293280 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-plugins-conf\") pod \"a416f725-cd7c-4bd8-9123-28cad18157d9\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.293305 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-tls\") pod \"a416f725-cd7c-4bd8-9123-28cad18157d9\" (UID: \"a416f725-cd7c-4bd8-9123-28cad18157d9\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.294063 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a416f725-cd7c-4bd8-9123-28cad18157d9" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.294512 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a416f725-cd7c-4bd8-9123-28cad18157d9" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.295327 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a416f725-cd7c-4bd8-9123-28cad18157d9" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.315371 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a416f725-cd7c-4bd8-9123-28cad18157d9" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.315463 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-kube-api-access-dvspz" (OuterVolumeSpecName: "kube-api-access-dvspz") pod "a416f725-cd7c-4bd8-9123-28cad18157d9" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9"). InnerVolumeSpecName "kube-api-access-dvspz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.315489 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a416f725-cd7c-4bd8-9123-28cad18157d9-pod-info" (OuterVolumeSpecName: "pod-info") pod "a416f725-cd7c-4bd8-9123-28cad18157d9" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.315493 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "a416f725-cd7c-4bd8-9123-28cad18157d9" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.315830 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a416f725-cd7c-4bd8-9123-28cad18157d9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a416f725-cd7c-4bd8-9123-28cad18157d9" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.317916 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data" (OuterVolumeSpecName: "config-data") pod "a416f725-cd7c-4bd8-9123-28cad18157d9" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.333138 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-server-conf" (OuterVolumeSpecName: "server-conf") pod "a416f725-cd7c-4bd8-9123-28cad18157d9" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.394635 5065 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a416f725-cd7c-4bd8-9123-28cad18157d9-pod-info\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.394985 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.395000 5065 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-server-conf\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.395011 5065 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.395022 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.395034 5065 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a416f725-cd7c-4bd8-9123-28cad18157d9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.395044 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvspz\" 
(UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-kube-api-access-dvspz\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.395055 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a416f725-cd7c-4bd8-9123-28cad18157d9-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.395065 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.395097 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.396993 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-85b95d746c-knffl" podUID="caf670f8-9cf6-4200-8036-05e9798cad78" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.169:8080/healthcheck\": dial tcp 10.217.0.169:8080: i/o timeout" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.396993 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-85b95d746c-knffl" podUID="caf670f8-9cf6-4200-8036-05e9798cad78" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.169:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.398852 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a416f725-cd7c-4bd8-9123-28cad18157d9" (UID: "a416f725-cd7c-4bd8-9123-28cad18157d9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.427722 5065 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.487707 5065 generic.go:334] "Generic (PLEG): container finished" podID="a416f725-cd7c-4bd8-9123-28cad18157d9" containerID="7ac59a251e4e8b65634fce3722c85c666a539268df3ef42910aef18edd46491a" exitCode=0 Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.487777 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a416f725-cd7c-4bd8-9123-28cad18157d9","Type":"ContainerDied","Data":"7ac59a251e4e8b65634fce3722c85c666a539268df3ef42910aef18edd46491a"} Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.487817 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a416f725-cd7c-4bd8-9123-28cad18157d9","Type":"ContainerDied","Data":"1bf3f683eb743a951ba5b04ff9c520ea57c867de4adaffd0fd976dbfd55f3bdb"} Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.487835 5065 scope.go:117] "RemoveContainer" containerID="7ac59a251e4e8b65634fce3722c85c666a539268df3ef42910aef18edd46491a" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.488367 5065 util.go:48] "No ready sandbox for pod can be found. 
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.496644 5065 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.496674 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a416f725-cd7c-4bd8-9123-28cad18157d9-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.498784 5065 generic.go:334] "Generic (PLEG): container finished" podID="0643aa92-2649-4c41-b16e-9a05aac93f35" containerID="c5ecc0498b80f64e750be4906d2d2f254a379f8816d1ff34f5e046c74b53a267" exitCode=0
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.498824 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77965b6945-w5rpz" event={"ID":"0643aa92-2649-4c41-b16e-9a05aac93f35","Type":"ContainerDied","Data":"c5ecc0498b80f64e750be4906d2d2f254a379f8816d1ff34f5e046c74b53a267"}
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.498862 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-77965b6945-w5rpz"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.498868 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77965b6945-w5rpz" event={"ID":"0643aa92-2649-4c41-b16e-9a05aac93f35","Type":"ContainerDied","Data":"ad2dd9b9c6b73b0c678557c3cf3bffeb5056772e3cb8dbed1b75ac55b8668549"}
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.503895 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"050c0e99-7984-43be-8701-84602f0c9294","Type":"ContainerDied","Data":"e47b95346f5fb41aa8812383f270346e803298adb9d1ded1f9e88d6c7a33b368"}
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.504162 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.506483 5065 generic.go:334] "Generic (PLEG): container finished" podID="ae3d89be-0a42-4a3d-914c-3bff67bd37b4" containerID="c184ff5407110302a6125a5a613f8a91d5febe7a6d10d230bf471f3d0f46b2f4" exitCode=0
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.506537 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ae3d89be-0a42-4a3d-914c-3bff67bd37b4","Type":"ContainerDied","Data":"c184ff5407110302a6125a5a613f8a91d5febe7a6d10d230bf471f3d0f46b2f4"}
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.508175 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2c2f3965-f057-4b1d-bbc9-7235ac48ed49/ovn-northd/0.log"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.508210 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2c2f3965-f057-4b1d-bbc9-7235ac48ed49","Type":"ContainerDied","Data":"5b32d2e0dacbb713ee7c85badd4d4089098997891f4ca797b74905c8a9b33eed"}
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.509285 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.566563 5065 scope.go:117] "RemoveContainer" containerID="8873473b6c6d45cdc9c68e469a3d5b5e234302c288241e9875c11f2360575009"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.576122 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.590852 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.595592 5065 scope.go:117] "RemoveContainer" containerID="7ac59a251e4e8b65634fce3722c85c666a539268df3ef42910aef18edd46491a"
Oct 08 13:40:29 crc kubenswrapper[5065]: E1008 13:40:29.595976 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ac59a251e4e8b65634fce3722c85c666a539268df3ef42910aef18edd46491a\": container with ID starting with 7ac59a251e4e8b65634fce3722c85c666a539268df3ef42910aef18edd46491a not found: ID does not exist" containerID="7ac59a251e4e8b65634fce3722c85c666a539268df3ef42910aef18edd46491a"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.596021 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ac59a251e4e8b65634fce3722c85c666a539268df3ef42910aef18edd46491a"} err="failed to get container status \"7ac59a251e4e8b65634fce3722c85c666a539268df3ef42910aef18edd46491a\": rpc error: code = NotFound desc = could not find container \"7ac59a251e4e8b65634fce3722c85c666a539268df3ef42910aef18edd46491a\": container with ID starting with 7ac59a251e4e8b65634fce3722c85c666a539268df3ef42910aef18edd46491a not found: ID does not exist"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.596049 5065 scope.go:117] "RemoveContainer" containerID="8873473b6c6d45cdc9c68e469a3d5b5e234302c288241e9875c11f2360575009"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.596059 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 08 13:40:29 crc kubenswrapper[5065]: E1008 13:40:29.596326 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8873473b6c6d45cdc9c68e469a3d5b5e234302c288241e9875c11f2360575009\": container with ID starting with 8873473b6c6d45cdc9c68e469a3d5b5e234302c288241e9875c11f2360575009 not found: ID does not exist" containerID="8873473b6c6d45cdc9c68e469a3d5b5e234302c288241e9875c11f2360575009"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.596358 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8873473b6c6d45cdc9c68e469a3d5b5e234302c288241e9875c11f2360575009"} err="failed to get container status \"8873473b6c6d45cdc9c68e469a3d5b5e234302c288241e9875c11f2360575009\": rpc error: code = NotFound desc = could not find container \"8873473b6c6d45cdc9c68e469a3d5b5e234302c288241e9875c11f2360575009\": container with ID starting with 8873473b6c6d45cdc9c68e469a3d5b5e234302c288241e9875c11f2360575009 not found: ID does not exist"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.596378 5065 scope.go:117] "RemoveContainer" containerID="c5ecc0498b80f64e750be4906d2d2f254a379f8816d1ff34f5e046c74b53a267"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.606760 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-77965b6945-w5rpz"]
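[Editor's note] The RemoveContainer → "ContainerStatus from runtime service failed" (NotFound) → "DeleteContainer returned error" triples above are benign races: by the time kubelet re-queries the container, CRI-O has already removed it. Cleanup code typically treats gRPC NotFound as success, along these lines (a sketch, not kubelet source):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound treats gRPC NotFound as success: the container is already
// gone, which is exactly the state the deletion wanted to reach.
func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	// Simulated runtime error of the kind logged above.
	err := status.Error(codes.NotFound, `could not find container "7ac59a25..."`)
	fmt.Println(ignoreNotFound(err)) // prints <nil>: deletion is idempotent
}
```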
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.614082 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-77965b6945-w5rpz"]
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.618532 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.640333 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"]
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.645063 5065 scope.go:117] "RemoveContainer" containerID="c5ecc0498b80f64e750be4906d2d2f254a379f8816d1ff34f5e046c74b53a267"
Oct 08 13:40:29 crc kubenswrapper[5065]: E1008 13:40:29.645783 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5ecc0498b80f64e750be4906d2d2f254a379f8816d1ff34f5e046c74b53a267\": container with ID starting with c5ecc0498b80f64e750be4906d2d2f254a379f8816d1ff34f5e046c74b53a267 not found: ID does not exist" containerID="c5ecc0498b80f64e750be4906d2d2f254a379f8816d1ff34f5e046c74b53a267"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.645815 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5ecc0498b80f64e750be4906d2d2f254a379f8816d1ff34f5e046c74b53a267"} err="failed to get container status \"c5ecc0498b80f64e750be4906d2d2f254a379f8816d1ff34f5e046c74b53a267\": rpc error: code = NotFound desc = could not find container \"c5ecc0498b80f64e750be4906d2d2f254a379f8816d1ff34f5e046c74b53a267\": container with ID starting with c5ecc0498b80f64e750be4906d2d2f254a379f8816d1ff34f5e046c74b53a267 not found: ID does not exist"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.645838 5065 scope.go:117] "RemoveContainer" containerID="4879ade2ad03c5af7ff4d4d4202d6af725543287d1ec07f3078406f5bb64df6e"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.646697 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.655307 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"]
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.681654 5065 scope.go:117] "RemoveContainer" containerID="d1bfe7a89169420e3afd6113c1442671a263c0dfbedac845f19da59f06dfd847"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.698315 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8q9c\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-kube-api-access-s8q9c\") pod \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") "
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.698403 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-erlang-cookie\") pod \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") "
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.698536 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-confd\") pod \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") "
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.698569 5065 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-pod-info\") pod \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.698637 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-server-conf\") pod \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.698667 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.698705 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-erlang-cookie-secret\") pod \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.698761 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-plugins-conf\") pod \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.698796 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data\") pod \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.698837 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-tls\") pod \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.698867 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-plugins\") pod \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\" (UID: \"ae3d89be-0a42-4a3d-914c-3bff67bd37b4\") " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.699097 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ae3d89be-0a42-4a3d-914c-3bff67bd37b4" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.699547 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ae3d89be-0a42-4a3d-914c-3bff67bd37b4" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.699685 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.701891 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ae3d89be-0a42-4a3d-914c-3bff67bd37b4" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.702460 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-kube-api-access-s8q9c" (OuterVolumeSpecName: "kube-api-access-s8q9c") pod "ae3d89be-0a42-4a3d-914c-3bff67bd37b4" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4"). InnerVolumeSpecName "kube-api-access-s8q9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.703978 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-pod-info" (OuterVolumeSpecName: "pod-info") pod "ae3d89be-0a42-4a3d-914c-3bff67bd37b4" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.705580 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ae3d89be-0a42-4a3d-914c-3bff67bd37b4" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.710812 5065 scope.go:117] "RemoveContainer" containerID="2c302eb0bc6fc03213ec7ffaa2e249422a78eeddefdda92efac176790ead6fa9" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.712216 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ae3d89be-0a42-4a3d-914c-3bff67bd37b4" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.712449 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "ae3d89be-0a42-4a3d-914c-3bff67bd37b4" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.720665 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data" (OuterVolumeSpecName: "config-data") pod "ae3d89be-0a42-4a3d-914c-3bff67bd37b4" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.736252 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-server-conf" (OuterVolumeSpecName: "server-conf") pod "ae3d89be-0a42-4a3d-914c-3bff67bd37b4" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.768053 5065 scope.go:117] "RemoveContainer" containerID="cd7a89294fe370f8f8e4fa9239f9e8afee5cb7f783b16a353fb49a9e06106fbe" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.786028 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ae3d89be-0a42-4a3d-914c-3bff67bd37b4" (UID: "ae3d89be-0a42-4a3d-914c-3bff67bd37b4"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.801272 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8q9c\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-kube-api-access-s8q9c\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.801308 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.801320 5065 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-pod-info\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.801328 5065 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-server-conf\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.801359 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.801371 5065 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.801381 5065 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.801392 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.801404 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 
13:40:29.801430 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae3d89be-0a42-4a3d-914c-3bff67bd37b4-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.817407 5065 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Oct 08 13:40:29 crc kubenswrapper[5065]: I1008 13:40:29.903297 5065 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.538151 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.539204 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ae3d89be-0a42-4a3d-914c-3bff67bd37b4","Type":"ContainerDied","Data":"a27a0b5894255df8201306c7b2fb3a57d2f9bbbc3d9b43efbe6e93aa161c70f1"}
Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.539246 5065 scope.go:117] "RemoveContainer" containerID="c184ff5407110302a6125a5a613f8a91d5febe7a6d10d230bf471f3d0f46b2f4"
Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.570887 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.579902 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.581010 5065 scope.go:117] "RemoveContainer" containerID="264b1ee903df6ce1a97e07b64d812c86782e3a58f2f09c609b3c81d9d02ee22a"
Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.884551 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050c0e99-7984-43be-8701-84602f0c9294" path="/var/lib/kubelet/pods/050c0e99-7984-43be-8701-84602f0c9294/volumes"
Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.885314 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0643aa92-2649-4c41-b16e-9a05aac93f35" path="/var/lib/kubelet/pods/0643aa92-2649-4c41-b16e-9a05aac93f35/volumes"
Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.885941 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" path="/var/lib/kubelet/pods/2a6ab417-1dfb-4427-a34e-fd8cf995b4c7/volumes"
Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.887366 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c2f3965-f057-4b1d-bbc9-7235ac48ed49" path="/var/lib/kubelet/pods/2c2f3965-f057-4b1d-bbc9-7235ac48ed49/volumes"
Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.888074 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="493c63a1-0210-4a70-a964-79522491fd05" path="/var/lib/kubelet/pods/493c63a1-0210-4a70-a964-79522491fd05/volumes"
Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.889367 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d28af9-b1bc-4475-abc6-9c33380349e9" path="/var/lib/kubelet/pods/84d28af9-b1bc-4475-abc6-9c33380349e9/volumes"
Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.889971 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a29eea83-9d60-4101-a351-6f8468a8116c" path="/var/lib/kubelet/pods/a29eea83-9d60-4101-a351-6f8468a8116c/volumes"
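[Editor's note] "Cleaned up orphaned pod volumes dir" means the pod object is gone from the API but its directory under /var/lib/kubelet/pods still existed. A toy approximation of that sweep in Go — the real kubelet check is stricter (it only removes dirs whose volumes are fully unmounted), and the active-UID set here is a stand-in:

```go
// Hedged sketch: scan the kubelet pods directory and flag volume dirs whose
// pod UID is no longer active. Assumes the /var/lib/kubelet/pods layout
// visible in the log; the removal itself is left commented out.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	active := map[string]bool{"73ec06a5-eadd-4545-a157-1aa731eabe13": true}
	entries, err := os.ReadDir("/var/lib/kubelet/pods")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		if e.IsDir() && !active[e.Name()] {
			dir := filepath.Join("/var/lib/kubelet/pods", e.Name(), "volumes")
			fmt.Printf("would clean up orphaned pod volumes dir %s\n", dir)
			// os.RemoveAll(dir) // destructive: intentionally not run here
		}
	}
}
```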
path="/var/lib/kubelet/pods/a29eea83-9d60-4101-a351-6f8468a8116c/volumes" Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.890603 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a416f725-cd7c-4bd8-9123-28cad18157d9" path="/var/lib/kubelet/pods/a416f725-cd7c-4bd8-9123-28cad18157d9/volumes" Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.892181 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae3d89be-0a42-4a3d-914c-3bff67bd37b4" path="/var/lib/kubelet/pods/ae3d89be-0a42-4a3d-914c-3bff67bd37b4/volumes" Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.892751 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbca12dd-73a9-4533-b424-ebaf0c8cec0c" path="/var/lib/kubelet/pods/bbca12dd-73a9-4533-b424-ebaf0c8cec0c/volumes" Oct 08 13:40:30 crc kubenswrapper[5065]: I1008 13:40:30.893723 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa6e8e72-d895-4018-a176-978d7975d8a6" path="/var/lib/kubelet/pods/fa6e8e72-d895-4018-a176-978d7975d8a6/volumes" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.569121 5065 generic.go:334] "Generic (PLEG): container finished" podID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerID="b435edf6aa605158448d0a1d1edd6a7ab24f12bb0cc39517202571a1d63578f3" exitCode=0 Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.569335 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73ec06a5-eadd-4545-a157-1aa731eabe13","Type":"ContainerDied","Data":"b435edf6aa605158448d0a1d1edd6a7ab24f12bb0cc39517202571a1d63578f3"} Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.639771 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.729271 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-ceilometer-tls-certs\") pod \"73ec06a5-eadd-4545-a157-1aa731eabe13\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.729318 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-scripts\") pod \"73ec06a5-eadd-4545-a157-1aa731eabe13\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.729381 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-sg-core-conf-yaml\") pod \"73ec06a5-eadd-4545-a157-1aa731eabe13\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.729406 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ec06a5-eadd-4545-a157-1aa731eabe13-log-httpd\") pod \"73ec06a5-eadd-4545-a157-1aa731eabe13\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.729543 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-config-data\") pod \"73ec06a5-eadd-4545-a157-1aa731eabe13\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " Oct 08 
13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.730156 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73ec06a5-eadd-4545-a157-1aa731eabe13-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "73ec06a5-eadd-4545-a157-1aa731eabe13" (UID: "73ec06a5-eadd-4545-a157-1aa731eabe13"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.730234 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ec06a5-eadd-4545-a157-1aa731eabe13-run-httpd\") pod \"73ec06a5-eadd-4545-a157-1aa731eabe13\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.730263 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqvmj\" (UniqueName: \"kubernetes.io/projected/73ec06a5-eadd-4545-a157-1aa731eabe13-kube-api-access-bqvmj\") pod \"73ec06a5-eadd-4545-a157-1aa731eabe13\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.730572 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73ec06a5-eadd-4545-a157-1aa731eabe13-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "73ec06a5-eadd-4545-a157-1aa731eabe13" (UID: "73ec06a5-eadd-4545-a157-1aa731eabe13"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.730661 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-combined-ca-bundle\") pod \"73ec06a5-eadd-4545-a157-1aa731eabe13\" (UID: \"73ec06a5-eadd-4545-a157-1aa731eabe13\") " Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.731028 5065 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ec06a5-eadd-4545-a157-1aa731eabe13-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.731056 5065 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73ec06a5-eadd-4545-a157-1aa731eabe13-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.734526 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-scripts" (OuterVolumeSpecName: "scripts") pod "73ec06a5-eadd-4545-a157-1aa731eabe13" (UID: "73ec06a5-eadd-4545-a157-1aa731eabe13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.734963 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ec06a5-eadd-4545-a157-1aa731eabe13-kube-api-access-bqvmj" (OuterVolumeSpecName: "kube-api-access-bqvmj") pod "73ec06a5-eadd-4545-a157-1aa731eabe13" (UID: "73ec06a5-eadd-4545-a157-1aa731eabe13"). InnerVolumeSpecName "kube-api-access-bqvmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.750784 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "73ec06a5-eadd-4545-a157-1aa731eabe13" (UID: "73ec06a5-eadd-4545-a157-1aa731eabe13"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.774707 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "73ec06a5-eadd-4545-a157-1aa731eabe13" (UID: "73ec06a5-eadd-4545-a157-1aa731eabe13"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.794347 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73ec06a5-eadd-4545-a157-1aa731eabe13" (UID: "73ec06a5-eadd-4545-a157-1aa731eabe13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.815597 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-config-data" (OuterVolumeSpecName: "config-data") pod "73ec06a5-eadd-4545-a157-1aa731eabe13" (UID: "73ec06a5-eadd-4545-a157-1aa731eabe13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.832434 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.832470 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqvmj\" (UniqueName: \"kubernetes.io/projected/73ec06a5-eadd-4545-a157-1aa731eabe13-kube-api-access-bqvmj\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.832483 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.832492 5065 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.832501 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:31 crc kubenswrapper[5065]: I1008 13:40:31.832510 5065 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/73ec06a5-eadd-4545-a157-1aa731eabe13-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:32 crc kubenswrapper[5065]: I1008 13:40:32.579810 5065 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"73ec06a5-eadd-4545-a157-1aa731eabe13","Type":"ContainerDied","Data":"e7653a79d2d3c1219cf05c177f1e8455487bfe3bbca2af0b4ad223d021e927a9"} Oct 08 13:40:32 crc kubenswrapper[5065]: I1008 13:40:32.579893 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 13:40:32 crc kubenswrapper[5065]: I1008 13:40:32.579912 5065 scope.go:117] "RemoveContainer" containerID="5064c6efe7bd2fd17eb7c7a569db4fa8aa930bcc4f32d0c03a00b49f045cd8eb" Oct 08 13:40:32 crc kubenswrapper[5065]: I1008 13:40:32.618331 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:40:32 crc kubenswrapper[5065]: I1008 13:40:32.624594 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 13:40:32 crc kubenswrapper[5065]: I1008 13:40:32.625130 5065 scope.go:117] "RemoveContainer" containerID="05fb2e1abbe2ed373a22504cfa769a58bda357d67045a5a667156055922438b4" Oct 08 13:40:32 crc kubenswrapper[5065]: I1008 13:40:32.646450 5065 scope.go:117] "RemoveContainer" containerID="b435edf6aa605158448d0a1d1edd6a7ab24f12bb0cc39517202571a1d63578f3" Oct 08 13:40:32 crc kubenswrapper[5065]: I1008 13:40:32.663732 5065 scope.go:117] "RemoveContainer" containerID="cb47b71cac0e504aa65a98ca53c8d8abc53a58802c1dd955a2eb6fd0b50a5da8" Oct 08 13:40:32 crc kubenswrapper[5065]: I1008 13:40:32.888213 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" path="/var/lib/kubelet/pods/73ec06a5-eadd-4545-a157-1aa731eabe13/volumes" Oct 08 13:40:33 crc kubenswrapper[5065]: E1008 13:40:33.878436 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:33 crc kubenswrapper[5065]: E1008 13:40:33.881138 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:33 crc kubenswrapper[5065]: E1008 13:40:33.882021 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:33 crc kubenswrapper[5065]: E1008 13:40:33.882086 5065 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovsdb-server" Oct 08 13:40:33 crc kubenswrapper[5065]: E1008 
Oct 08 13:40:33 crc kubenswrapper[5065]: E1008 13:40:33.882169 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Oct 08 13:40:33 crc kubenswrapper[5065]: E1008 13:40:33.883743 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Oct 08 13:40:33 crc kubenswrapper[5065]: E1008 13:40:33.885095 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Oct 08 13:40:33 crc kubenswrapper[5065]: E1008 13:40:33.885133 5065 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovs-vswitchd"
Oct 08 13:40:36 crc kubenswrapper[5065]: I1008 13:40:36.431632 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5f88c4599-sd7mw" podUID="fd3f72f8-a569-409f-a590-02a0f7fcdc81" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": dial tcp 10.217.0.155:9696: connect: connection refused"
Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.601922 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f88c4599-sd7mw"
Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.654837 5065 generic.go:334] "Generic (PLEG): container finished" podID="fd3f72f8-a569-409f-a590-02a0f7fcdc81" containerID="12a6dc5f131bbdcb6ea1fc707c13a16e4fc8d0b823a7816221475889271b7ff8" exitCode=0
Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.654895 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f88c4599-sd7mw" event={"ID":"fd3f72f8-a569-409f-a590-02a0f7fcdc81","Type":"ContainerDied","Data":"12a6dc5f131bbdcb6ea1fc707c13a16e4fc8d0b823a7816221475889271b7ff8"}
Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.654927 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f88c4599-sd7mw" event={"ID":"fd3f72f8-a569-409f-a590-02a0f7fcdc81","Type":"ContainerDied","Data":"f14c9aa44d8493f186222b9073cb137ce22c9868d2dcd170fb579a3f5108884e"}
Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.654945 5065 scope.go:117] "RemoveContainer" containerID="82d033cc034d74a52247d1b9862682e1e567a8d524767ccde6e07f6e29b4e821"
Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.654983 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f88c4599-sd7mw" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.676893 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-public-tls-certs\") pod \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.676988 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-ovndb-tls-certs\") pod \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.677032 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-config\") pod \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.677053 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-internal-tls-certs\") pod \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.677074 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-httpd-config\") pod \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.677126 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-combined-ca-bundle\") pod \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.677168 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm6hf\" (UniqueName: \"kubernetes.io/projected/fd3f72f8-a569-409f-a590-02a0f7fcdc81-kube-api-access-hm6hf\") pod \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\" (UID: \"fd3f72f8-a569-409f-a590-02a0f7fcdc81\") " Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.684857 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd3f72f8-a569-409f-a590-02a0f7fcdc81-kube-api-access-hm6hf" (OuterVolumeSpecName: "kube-api-access-hm6hf") pod "fd3f72f8-a569-409f-a590-02a0f7fcdc81" (UID: "fd3f72f8-a569-409f-a590-02a0f7fcdc81"). InnerVolumeSpecName "kube-api-access-hm6hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.685760 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fd3f72f8-a569-409f-a590-02a0f7fcdc81" (UID: "fd3f72f8-a569-409f-a590-02a0f7fcdc81"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.687893 5065 scope.go:117] "RemoveContainer" containerID="12a6dc5f131bbdcb6ea1fc707c13a16e4fc8d0b823a7816221475889271b7ff8" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.723946 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fd3f72f8-a569-409f-a590-02a0f7fcdc81" (UID: "fd3f72f8-a569-409f-a590-02a0f7fcdc81"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.725211 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fd3f72f8-a569-409f-a590-02a0f7fcdc81" (UID: "fd3f72f8-a569-409f-a590-02a0f7fcdc81"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.726207 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd3f72f8-a569-409f-a590-02a0f7fcdc81" (UID: "fd3f72f8-a569-409f-a590-02a0f7fcdc81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.727224 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-config" (OuterVolumeSpecName: "config") pod "fd3f72f8-a569-409f-a590-02a0f7fcdc81" (UID: "fd3f72f8-a569-409f-a590-02a0f7fcdc81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.752927 5065 scope.go:117] "RemoveContainer" containerID="82d033cc034d74a52247d1b9862682e1e567a8d524767ccde6e07f6e29b4e821" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.752938 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "fd3f72f8-a569-409f-a590-02a0f7fcdc81" (UID: "fd3f72f8-a569-409f-a590-02a0f7fcdc81"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:37 crc kubenswrapper[5065]: E1008 13:40:37.754690 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82d033cc034d74a52247d1b9862682e1e567a8d524767ccde6e07f6e29b4e821\": container with ID starting with 82d033cc034d74a52247d1b9862682e1e567a8d524767ccde6e07f6e29b4e821 not found: ID does not exist" containerID="82d033cc034d74a52247d1b9862682e1e567a8d524767ccde6e07f6e29b4e821" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.754726 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82d033cc034d74a52247d1b9862682e1e567a8d524767ccde6e07f6e29b4e821"} err="failed to get container status \"82d033cc034d74a52247d1b9862682e1e567a8d524767ccde6e07f6e29b4e821\": rpc error: code = NotFound desc = could not find container \"82d033cc034d74a52247d1b9862682e1e567a8d524767ccde6e07f6e29b4e821\": container with ID starting with 82d033cc034d74a52247d1b9862682e1e567a8d524767ccde6e07f6e29b4e821 not found: ID does not exist" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.754750 5065 scope.go:117] "RemoveContainer" containerID="12a6dc5f131bbdcb6ea1fc707c13a16e4fc8d0b823a7816221475889271b7ff8" Oct 08 13:40:37 crc kubenswrapper[5065]: E1008 13:40:37.755082 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12a6dc5f131bbdcb6ea1fc707c13a16e4fc8d0b823a7816221475889271b7ff8\": container with ID starting with 12a6dc5f131bbdcb6ea1fc707c13a16e4fc8d0b823a7816221475889271b7ff8 not found: ID does not exist" containerID="12a6dc5f131bbdcb6ea1fc707c13a16e4fc8d0b823a7816221475889271b7ff8" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.755117 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12a6dc5f131bbdcb6ea1fc707c13a16e4fc8d0b823a7816221475889271b7ff8"} err="failed to get container status \"12a6dc5f131bbdcb6ea1fc707c13a16e4fc8d0b823a7816221475889271b7ff8\": rpc error: code = NotFound desc = could not find container \"12a6dc5f131bbdcb6ea1fc707c13a16e4fc8d0b823a7816221475889271b7ff8\": container with ID starting with 12a6dc5f131bbdcb6ea1fc707c13a16e4fc8d0b823a7816221475889271b7ff8 not found: ID does not exist" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.779046 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm6hf\" (UniqueName: \"kubernetes.io/projected/fd3f72f8-a569-409f-a590-02a0f7fcdc81-kube-api-access-hm6hf\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.779092 5065 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.779111 5065 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.779128 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.779145 5065 reconciler_common.go:293] "Volume detached for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.779160 5065 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.779175 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3f72f8-a569-409f-a590-02a0f7fcdc81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.985594 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f88c4599-sd7mw"] Oct 08 13:40:37 crc kubenswrapper[5065]: I1008 13:40:37.989629 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5f88c4599-sd7mw"] Oct 08 13:40:38 crc kubenswrapper[5065]: E1008 13:40:38.881175 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:38 crc kubenswrapper[5065]: E1008 13:40:38.881661 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:38 crc kubenswrapper[5065]: E1008 13:40:38.882058 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:38 crc kubenswrapper[5065]: E1008 13:40:38.882088 5065 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovsdb-server" Oct 08 13:40:38 crc kubenswrapper[5065]: E1008 13:40:38.882115 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:38 crc kubenswrapper[5065]: E1008 13:40:38.883782 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:38 crc kubenswrapper[5065]: E1008 13:40:38.886647 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:38 crc kubenswrapper[5065]: E1008 13:40:38.886716 5065 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovs-vswitchd" Oct 08 13:40:38 crc kubenswrapper[5065]: I1008 13:40:38.890138 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd3f72f8-a569-409f-a590-02a0f7fcdc81" path="/var/lib/kubelet/pods/fd3f72f8-a569-409f-a590-02a0f7fcdc81/volumes" Oct 08 13:40:43 crc kubenswrapper[5065]: E1008 13:40:43.878625 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:43 crc kubenswrapper[5065]: E1008 13:40:43.879616 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:43 crc kubenswrapper[5065]: E1008 13:40:43.880100 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:43 crc kubenswrapper[5065]: E1008 13:40:43.880149 5065 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovsdb-server" Oct 08 13:40:43 crc kubenswrapper[5065]: E1008 13:40:43.881471 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:43 crc kubenswrapper[5065]: E1008 13:40:43.882886 5065 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:43 crc kubenswrapper[5065]: E1008 13:40:43.884487 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:43 crc kubenswrapper[5065]: E1008 13:40:43.884559 5065 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovs-vswitchd" Oct 08 13:40:48 crc kubenswrapper[5065]: E1008 13:40:48.879291 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:48 crc kubenswrapper[5065]: E1008 13:40:48.880073 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:48 crc kubenswrapper[5065]: E1008 13:40:48.880296 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 13:40:48 crc kubenswrapper[5065]: E1008 13:40:48.880331 5065 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovsdb-server" Oct 08 13:40:48 crc kubenswrapper[5065]: E1008 13:40:48.880860 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:48 crc kubenswrapper[5065]: E1008 13:40:48.882965 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:48 crc kubenswrapper[5065]: E1008 13:40:48.884932 5065 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 13:40:48 crc kubenswrapper[5065]: E1008 13:40:48.885108 5065 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-f9wxn" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovs-vswitchd" Oct 08 13:40:50 crc kubenswrapper[5065]: I1008 13:40:50.782701 5065 generic.go:334] "Generic (PLEG): container finished" podID="0d473a1f-35dc-4b20-a344-19c23f1c8c06" containerID="031335faa75843deb180f0a407142ad9a9127dae436f5b2f0f90352f75ef55b1" exitCode=137 Oct 08 13:40:50 crc kubenswrapper[5065]: I1008 13:40:50.782743 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d473a1f-35dc-4b20-a344-19c23f1c8c06","Type":"ContainerDied","Data":"031335faa75843deb180f0a407142ad9a9127dae436f5b2f0f90352f75ef55b1"} Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.184535 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.294216 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-scripts\") pod \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.294314 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2grk9\" (UniqueName: \"kubernetes.io/projected/0d473a1f-35dc-4b20-a344-19c23f1c8c06-kube-api-access-2grk9\") pod \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.294349 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-config-data-custom\") pod \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.294453 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-combined-ca-bundle\") pod \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.294561 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d473a1f-35dc-4b20-a344-19c23f1c8c06-etc-machine-id\") pod \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 
13:40:51.294604 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-config-data\") pod \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\" (UID: \"0d473a1f-35dc-4b20-a344-19c23f1c8c06\") " Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.295228 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d473a1f-35dc-4b20-a344-19c23f1c8c06-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0d473a1f-35dc-4b20-a344-19c23f1c8c06" (UID: "0d473a1f-35dc-4b20-a344-19c23f1c8c06"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.299884 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-scripts" (OuterVolumeSpecName: "scripts") pod "0d473a1f-35dc-4b20-a344-19c23f1c8c06" (UID: "0d473a1f-35dc-4b20-a344-19c23f1c8c06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.300162 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0d473a1f-35dc-4b20-a344-19c23f1c8c06" (UID: "0d473a1f-35dc-4b20-a344-19c23f1c8c06"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.301504 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d473a1f-35dc-4b20-a344-19c23f1c8c06-kube-api-access-2grk9" (OuterVolumeSpecName: "kube-api-access-2grk9") pod "0d473a1f-35dc-4b20-a344-19c23f1c8c06" (UID: "0d473a1f-35dc-4b20-a344-19c23f1c8c06"). InnerVolumeSpecName "kube-api-access-2grk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.338861 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d473a1f-35dc-4b20-a344-19c23f1c8c06" (UID: "0d473a1f-35dc-4b20-a344-19c23f1c8c06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.389566 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-config-data" (OuterVolumeSpecName: "config-data") pod "0d473a1f-35dc-4b20-a344-19c23f1c8c06" (UID: "0d473a1f-35dc-4b20-a344-19c23f1c8c06"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.396106 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.396132 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.396142 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2grk9\" (UniqueName: \"kubernetes.io/projected/0d473a1f-35dc-4b20-a344-19c23f1c8c06-kube-api-access-2grk9\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.396153 5065 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.396163 5065 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d473a1f-35dc-4b20-a344-19c23f1c8c06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.396171 5065 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d473a1f-35dc-4b20-a344-19c23f1c8c06-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.802360 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d473a1f-35dc-4b20-a344-19c23f1c8c06","Type":"ContainerDied","Data":"22b3afb5168cedc8711dd30cf6488d464901864b51d4792cb1a58f1c8ddeafc2"} Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.802651 5065 scope.go:117] "RemoveContainer" containerID="2f9df638248c0e2761238e4dd3780cc9ae247b9e55d3548a3bb207837154cf62" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.802386 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.806061 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f9wxn_f523d852-2e73-4168-b3ca-af18fa28cc07/ovs-vswitchd/0.log" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.815034 5065 generic.go:334] "Generic (PLEG): container finished" podID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" exitCode=137 Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.815083 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f9wxn" event={"ID":"f523d852-2e73-4168-b3ca-af18fa28cc07","Type":"ContainerDied","Data":"5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee"} Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.830786 5065 scope.go:117] "RemoveContainer" containerID="031335faa75843deb180f0a407142ad9a9127dae436f5b2f0f90352f75ef55b1" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.836716 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.841919 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.849509 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f9wxn_f523d852-2e73-4168-b3ca-af18fa28cc07/ovs-vswitchd/0.log" Oct 08 13:40:51 crc kubenswrapper[5065]: I1008 13:40:51.851481 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.003828 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f523d852-2e73-4168-b3ca-af18fa28cc07-scripts\") pod \"f523d852-2e73-4168-b3ca-af18fa28cc07\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.003910 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjkg6\" (UniqueName: \"kubernetes.io/projected/f523d852-2e73-4168-b3ca-af18fa28cc07-kube-api-access-xjkg6\") pod \"f523d852-2e73-4168-b3ca-af18fa28cc07\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.003948 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-etc-ovs\") pod \"f523d852-2e73-4168-b3ca-af18fa28cc07\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.003979 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-lib\") pod \"f523d852-2e73-4168-b3ca-af18fa28cc07\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.004032 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-run\") pod \"f523d852-2e73-4168-b3ca-af18fa28cc07\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.004090 5065 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-log\") pod \"f523d852-2e73-4168-b3ca-af18fa28cc07\" (UID: \"f523d852-2e73-4168-b3ca-af18fa28cc07\") " Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.004136 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "f523d852-2e73-4168-b3ca-af18fa28cc07" (UID: "f523d852-2e73-4168-b3ca-af18fa28cc07"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.004188 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-lib" (OuterVolumeSpecName: "var-lib") pod "f523d852-2e73-4168-b3ca-af18fa28cc07" (UID: "f523d852-2e73-4168-b3ca-af18fa28cc07"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.004224 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-run" (OuterVolumeSpecName: "var-run") pod "f523d852-2e73-4168-b3ca-af18fa28cc07" (UID: "f523d852-2e73-4168-b3ca-af18fa28cc07"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.004321 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-log" (OuterVolumeSpecName: "var-log") pod "f523d852-2e73-4168-b3ca-af18fa28cc07" (UID: "f523d852-2e73-4168-b3ca-af18fa28cc07"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.004535 5065 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-log\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.004555 5065 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-etc-ovs\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.004569 5065 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-lib\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.004579 5065 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f523d852-2e73-4168-b3ca-af18fa28cc07-var-run\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.005275 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f523d852-2e73-4168-b3ca-af18fa28cc07-scripts" (OuterVolumeSpecName: "scripts") pod "f523d852-2e73-4168-b3ca-af18fa28cc07" (UID: "f523d852-2e73-4168-b3ca-af18fa28cc07"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.007734 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f523d852-2e73-4168-b3ca-af18fa28cc07-kube-api-access-xjkg6" (OuterVolumeSpecName: "kube-api-access-xjkg6") pod "f523d852-2e73-4168-b3ca-af18fa28cc07" (UID: "f523d852-2e73-4168-b3ca-af18fa28cc07"). InnerVolumeSpecName "kube-api-access-xjkg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.105600 5065 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f523d852-2e73-4168-b3ca-af18fa28cc07-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.105638 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjkg6\" (UniqueName: \"kubernetes.io/projected/f523d852-2e73-4168-b3ca-af18fa28cc07-kube-api-access-xjkg6\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.401794 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.509891 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfhff\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-kube-api-access-vfhff\") pod \"19063d41-be34-463b-8bb7-d45f7d804602\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.509951 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift\") pod \"19063d41-be34-463b-8bb7-d45f7d804602\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.509977 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19063d41-be34-463b-8bb7-d45f7d804602-lock\") pod \"19063d41-be34-463b-8bb7-d45f7d804602\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.510008 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19063d41-be34-463b-8bb7-d45f7d804602-cache\") pod \"19063d41-be34-463b-8bb7-d45f7d804602\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.510042 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"19063d41-be34-463b-8bb7-d45f7d804602\" (UID: \"19063d41-be34-463b-8bb7-d45f7d804602\") " Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.510545 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19063d41-be34-463b-8bb7-d45f7d804602-lock" (OuterVolumeSpecName: "lock") pod "19063d41-be34-463b-8bb7-d45f7d804602" (UID: "19063d41-be34-463b-8bb7-d45f7d804602"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.510711 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19063d41-be34-463b-8bb7-d45f7d804602-cache" (OuterVolumeSpecName: "cache") pod "19063d41-be34-463b-8bb7-d45f7d804602" (UID: "19063d41-be34-463b-8bb7-d45f7d804602"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.511137 5065 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19063d41-be34-463b-8bb7-d45f7d804602-lock\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.511188 5065 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19063d41-be34-463b-8bb7-d45f7d804602-cache\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.513748 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "swift") pod "19063d41-be34-463b-8bb7-d45f7d804602" (UID: "19063d41-be34-463b-8bb7-d45f7d804602"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.514621 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "19063d41-be34-463b-8bb7-d45f7d804602" (UID: "19063d41-be34-463b-8bb7-d45f7d804602"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.515431 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-kube-api-access-vfhff" (OuterVolumeSpecName: "kube-api-access-vfhff") pod "19063d41-be34-463b-8bb7-d45f7d804602" (UID: "19063d41-be34-463b-8bb7-d45f7d804602"). InnerVolumeSpecName "kube-api-access-vfhff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.612527 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.612607 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfhff\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-kube-api-access-vfhff\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.612637 5065 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19063d41-be34-463b-8bb7-d45f7d804602-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.633733 5065 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.714279 5065 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.827961 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f9wxn_f523d852-2e73-4168-b3ca-af18fa28cc07/ovs-vswitchd/0.log" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.828685 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f9wxn" event={"ID":"f523d852-2e73-4168-b3ca-af18fa28cc07","Type":"ContainerDied","Data":"7fb519e84f26876238729134c00763e1fe3488e3657accc660e01f25a7662e66"} Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.828715 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-f9wxn" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.828726 5065 scope.go:117] "RemoveContainer" containerID="5a321fa3c534b03a79e075037211f2d4274c3933f1fedd426ceed76fef0e43ee" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.838740 5065 generic.go:334] "Generic (PLEG): container finished" podID="19063d41-be34-463b-8bb7-d45f7d804602" containerID="ee8180b4debc8a8a65ab9f256776c79126427b029cc1b89fe8e4f39cd79c3743" exitCode=137 Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.838786 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"ee8180b4debc8a8a65ab9f256776c79126427b029cc1b89fe8e4f39cd79c3743"} Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.838840 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19063d41-be34-463b-8bb7-d45f7d804602","Type":"ContainerDied","Data":"964753e35406738095c8766f59eb685720ecd5328d52e95ead61b58ba12eff82"} Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.838889 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.864151 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-f9wxn"] Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.867993 5065 scope.go:117] "RemoveContainer" containerID="a562038ef6f5f29202df24aba54b60cd58e62ad3977c8bdd1699c2e29e607ddf" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.870018 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-f9wxn"] Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.887542 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d473a1f-35dc-4b20-a344-19c23f1c8c06" path="/var/lib/kubelet/pods/0d473a1f-35dc-4b20-a344-19c23f1c8c06/volumes" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.888097 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" path="/var/lib/kubelet/pods/f523d852-2e73-4168-b3ca-af18fa28cc07/volumes" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.888670 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.893440 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.906755 5065 scope.go:117] "RemoveContainer" containerID="504cb4d2f0d2b0818331cd6d07891089513ae3e6588d657954b8285ef3cba2aa" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.936647 5065 scope.go:117] "RemoveContainer" containerID="ee8180b4debc8a8a65ab9f256776c79126427b029cc1b89fe8e4f39cd79c3743" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.960601 5065 scope.go:117] "RemoveContainer" containerID="361bbf3967cbd94d97d58e01749266489ce91e87fdaebf76c4503f54283e2a95" Oct 08 13:40:52 crc kubenswrapper[5065]: I1008 13:40:52.993473 5065 scope.go:117] "RemoveContainer" containerID="4bfd68b6c2e297a3e83b008f64bf4eb401d26a1425b3db49bc21146e3d7c7872" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.017923 5065 scope.go:117] "RemoveContainer" containerID="6c09e71db7522c3e069f3e338197a1e134c04222ec8bac9d76d05c50c19f239d" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.042671 5065 scope.go:117] "RemoveContainer" containerID="9c7224674d840915450cc8c6e25639a7e80b95b6957004f6de0f29bf0fb91d8b" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.075369 5065 scope.go:117] "RemoveContainer" containerID="7c64b1351e83cde0e80c1d7fe3ad4b7e16de2a070b15742831cbf0394985803c" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.098764 5065 scope.go:117] "RemoveContainer" containerID="4af6a29227998441f27fb03057c52c8260bc95198e0a44323b807dcb4528d586" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.123267 5065 scope.go:117] "RemoveContainer" containerID="bb695ac650185f4333ac64a02ac24fd255826948ebc5681b92747b3e86469514" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.143008 5065 scope.go:117] "RemoveContainer" containerID="20d9ec411562f51e0685da916aad11e10d63910e27f84036d46574483c293732" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.160658 5065 scope.go:117] "RemoveContainer" containerID="6042f285dd28c2e556762f729925d333ecdd7101fd2afd3449915404051b7432" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.176944 5065 scope.go:117] "RemoveContainer" containerID="64b194891a57866256321c60963b1a55abe5ea9137d96d8eea3238c821373834" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 
13:40:53.200687 5065 scope.go:117] "RemoveContainer" containerID="00ff24aff045fd814a16790f59f062842fc0ff9e0601f5b391a537de4dcebe49" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.224759 5065 scope.go:117] "RemoveContainer" containerID="f0e5c35dcc9f669808490b8ddada56ac33644db4f172f75ea9b908094bd89de9" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.247135 5065 scope.go:117] "RemoveContainer" containerID="c7e5718722f1f0cc720ba03200b5072c4500b734f459734ed12c0f62b2f3f7ea" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.264874 5065 scope.go:117] "RemoveContainer" containerID="447f9192660f0cd3d7854a61bd51664ab2997e184354155cd12971c4a4b3c37f" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.290013 5065 scope.go:117] "RemoveContainer" containerID="ee8180b4debc8a8a65ab9f256776c79126427b029cc1b89fe8e4f39cd79c3743" Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.290657 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee8180b4debc8a8a65ab9f256776c79126427b029cc1b89fe8e4f39cd79c3743\": container with ID starting with ee8180b4debc8a8a65ab9f256776c79126427b029cc1b89fe8e4f39cd79c3743 not found: ID does not exist" containerID="ee8180b4debc8a8a65ab9f256776c79126427b029cc1b89fe8e4f39cd79c3743" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.290700 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee8180b4debc8a8a65ab9f256776c79126427b029cc1b89fe8e4f39cd79c3743"} err="failed to get container status \"ee8180b4debc8a8a65ab9f256776c79126427b029cc1b89fe8e4f39cd79c3743\": rpc error: code = NotFound desc = could not find container \"ee8180b4debc8a8a65ab9f256776c79126427b029cc1b89fe8e4f39cd79c3743\": container with ID starting with ee8180b4debc8a8a65ab9f256776c79126427b029cc1b89fe8e4f39cd79c3743 not found: ID does not exist" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.290726 5065 scope.go:117] "RemoveContainer" containerID="361bbf3967cbd94d97d58e01749266489ce91e87fdaebf76c4503f54283e2a95" Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.291261 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"361bbf3967cbd94d97d58e01749266489ce91e87fdaebf76c4503f54283e2a95\": container with ID starting with 361bbf3967cbd94d97d58e01749266489ce91e87fdaebf76c4503f54283e2a95 not found: ID does not exist" containerID="361bbf3967cbd94d97d58e01749266489ce91e87fdaebf76c4503f54283e2a95" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.291312 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"361bbf3967cbd94d97d58e01749266489ce91e87fdaebf76c4503f54283e2a95"} err="failed to get container status \"361bbf3967cbd94d97d58e01749266489ce91e87fdaebf76c4503f54283e2a95\": rpc error: code = NotFound desc = could not find container \"361bbf3967cbd94d97d58e01749266489ce91e87fdaebf76c4503f54283e2a95\": container with ID starting with 361bbf3967cbd94d97d58e01749266489ce91e87fdaebf76c4503f54283e2a95 not found: ID does not exist" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.291345 5065 scope.go:117] "RemoveContainer" containerID="4bfd68b6c2e297a3e83b008f64bf4eb401d26a1425b3db49bc21146e3d7c7872" Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.291808 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4bfd68b6c2e297a3e83b008f64bf4eb401d26a1425b3db49bc21146e3d7c7872\": container with ID starting with 4bfd68b6c2e297a3e83b008f64bf4eb401d26a1425b3db49bc21146e3d7c7872 not found: ID does not exist" containerID="4bfd68b6c2e297a3e83b008f64bf4eb401d26a1425b3db49bc21146e3d7c7872" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.291858 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bfd68b6c2e297a3e83b008f64bf4eb401d26a1425b3db49bc21146e3d7c7872"} err="failed to get container status \"4bfd68b6c2e297a3e83b008f64bf4eb401d26a1425b3db49bc21146e3d7c7872\": rpc error: code = NotFound desc = could not find container \"4bfd68b6c2e297a3e83b008f64bf4eb401d26a1425b3db49bc21146e3d7c7872\": container with ID starting with 4bfd68b6c2e297a3e83b008f64bf4eb401d26a1425b3db49bc21146e3d7c7872 not found: ID does not exist" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.291900 5065 scope.go:117] "RemoveContainer" containerID="6c09e71db7522c3e069f3e338197a1e134c04222ec8bac9d76d05c50c19f239d" Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.292274 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c09e71db7522c3e069f3e338197a1e134c04222ec8bac9d76d05c50c19f239d\": container with ID starting with 6c09e71db7522c3e069f3e338197a1e134c04222ec8bac9d76d05c50c19f239d not found: ID does not exist" containerID="6c09e71db7522c3e069f3e338197a1e134c04222ec8bac9d76d05c50c19f239d" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.292299 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c09e71db7522c3e069f3e338197a1e134c04222ec8bac9d76d05c50c19f239d"} err="failed to get container status \"6c09e71db7522c3e069f3e338197a1e134c04222ec8bac9d76d05c50c19f239d\": rpc error: code = NotFound desc = could not find container \"6c09e71db7522c3e069f3e338197a1e134c04222ec8bac9d76d05c50c19f239d\": container with ID starting with 6c09e71db7522c3e069f3e338197a1e134c04222ec8bac9d76d05c50c19f239d not found: ID does not exist" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.292315 5065 scope.go:117] "RemoveContainer" containerID="9c7224674d840915450cc8c6e25639a7e80b95b6957004f6de0f29bf0fb91d8b" Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.292598 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c7224674d840915450cc8c6e25639a7e80b95b6957004f6de0f29bf0fb91d8b\": container with ID starting with 9c7224674d840915450cc8c6e25639a7e80b95b6957004f6de0f29bf0fb91d8b not found: ID does not exist" containerID="9c7224674d840915450cc8c6e25639a7e80b95b6957004f6de0f29bf0fb91d8b" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.292640 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c7224674d840915450cc8c6e25639a7e80b95b6957004f6de0f29bf0fb91d8b"} err="failed to get container status \"9c7224674d840915450cc8c6e25639a7e80b95b6957004f6de0f29bf0fb91d8b\": rpc error: code = NotFound desc = could not find container \"9c7224674d840915450cc8c6e25639a7e80b95b6957004f6de0f29bf0fb91d8b\": container with ID starting with 9c7224674d840915450cc8c6e25639a7e80b95b6957004f6de0f29bf0fb91d8b not found: ID does not exist" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.292669 5065 scope.go:117] "RemoveContainer" containerID="7c64b1351e83cde0e80c1d7fe3ad4b7e16de2a070b15742831cbf0394985803c" Oct 08 13:40:53 crc 
kubenswrapper[5065]: E1008 13:40:53.293013 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c64b1351e83cde0e80c1d7fe3ad4b7e16de2a070b15742831cbf0394985803c\": container with ID starting with 7c64b1351e83cde0e80c1d7fe3ad4b7e16de2a070b15742831cbf0394985803c not found: ID does not exist" containerID="7c64b1351e83cde0e80c1d7fe3ad4b7e16de2a070b15742831cbf0394985803c" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.293041 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c64b1351e83cde0e80c1d7fe3ad4b7e16de2a070b15742831cbf0394985803c"} err="failed to get container status \"7c64b1351e83cde0e80c1d7fe3ad4b7e16de2a070b15742831cbf0394985803c\": rpc error: code = NotFound desc = could not find container \"7c64b1351e83cde0e80c1d7fe3ad4b7e16de2a070b15742831cbf0394985803c\": container with ID starting with 7c64b1351e83cde0e80c1d7fe3ad4b7e16de2a070b15742831cbf0394985803c not found: ID does not exist" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.293057 5065 scope.go:117] "RemoveContainer" containerID="4af6a29227998441f27fb03057c52c8260bc95198e0a44323b807dcb4528d586" Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.293448 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4af6a29227998441f27fb03057c52c8260bc95198e0a44323b807dcb4528d586\": container with ID starting with 4af6a29227998441f27fb03057c52c8260bc95198e0a44323b807dcb4528d586 not found: ID does not exist" containerID="4af6a29227998441f27fb03057c52c8260bc95198e0a44323b807dcb4528d586" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.293537 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af6a29227998441f27fb03057c52c8260bc95198e0a44323b807dcb4528d586"} err="failed to get container status \"4af6a29227998441f27fb03057c52c8260bc95198e0a44323b807dcb4528d586\": rpc error: code = NotFound desc = could not find container \"4af6a29227998441f27fb03057c52c8260bc95198e0a44323b807dcb4528d586\": container with ID starting with 4af6a29227998441f27fb03057c52c8260bc95198e0a44323b807dcb4528d586 not found: ID does not exist" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.293594 5065 scope.go:117] "RemoveContainer" containerID="bb695ac650185f4333ac64a02ac24fd255826948ebc5681b92747b3e86469514" Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.294090 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb695ac650185f4333ac64a02ac24fd255826948ebc5681b92747b3e86469514\": container with ID starting with bb695ac650185f4333ac64a02ac24fd255826948ebc5681b92747b3e86469514 not found: ID does not exist" containerID="bb695ac650185f4333ac64a02ac24fd255826948ebc5681b92747b3e86469514" Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.294119 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb695ac650185f4333ac64a02ac24fd255826948ebc5681b92747b3e86469514"} err="failed to get container status \"bb695ac650185f4333ac64a02ac24fd255826948ebc5681b92747b3e86469514\": rpc error: code = NotFound desc = could not find container \"bb695ac650185f4333ac64a02ac24fd255826948ebc5681b92747b3e86469514\": container with ID starting with bb695ac650185f4333ac64a02ac24fd255826948ebc5681b92747b3e86469514 not found: ID does not exist" Oct 08 13:40:53 crc kubenswrapper[5065]: 
I1008 13:40:53.294137 5065 scope.go:117] "RemoveContainer" containerID="20d9ec411562f51e0685da916aad11e10d63910e27f84036d46574483c293732"
Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.294597 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20d9ec411562f51e0685da916aad11e10d63910e27f84036d46574483c293732\": container with ID starting with 20d9ec411562f51e0685da916aad11e10d63910e27f84036d46574483c293732 not found: ID does not exist" containerID="20d9ec411562f51e0685da916aad11e10d63910e27f84036d46574483c293732"
Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.294679 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d9ec411562f51e0685da916aad11e10d63910e27f84036d46574483c293732"} err="failed to get container status \"20d9ec411562f51e0685da916aad11e10d63910e27f84036d46574483c293732\": rpc error: code = NotFound desc = could not find container \"20d9ec411562f51e0685da916aad11e10d63910e27f84036d46574483c293732\": container with ID starting with 20d9ec411562f51e0685da916aad11e10d63910e27f84036d46574483c293732 not found: ID does not exist"
Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.294710 5065 scope.go:117] "RemoveContainer" containerID="6042f285dd28c2e556762f729925d333ecdd7101fd2afd3449915404051b7432"
Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.295071 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6042f285dd28c2e556762f729925d333ecdd7101fd2afd3449915404051b7432\": container with ID starting with 6042f285dd28c2e556762f729925d333ecdd7101fd2afd3449915404051b7432 not found: ID does not exist" containerID="6042f285dd28c2e556762f729925d333ecdd7101fd2afd3449915404051b7432"
Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.295111 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6042f285dd28c2e556762f729925d333ecdd7101fd2afd3449915404051b7432"} err="failed to get container status \"6042f285dd28c2e556762f729925d333ecdd7101fd2afd3449915404051b7432\": rpc error: code = NotFound desc = could not find container \"6042f285dd28c2e556762f729925d333ecdd7101fd2afd3449915404051b7432\": container with ID starting with 6042f285dd28c2e556762f729925d333ecdd7101fd2afd3449915404051b7432 not found: ID does not exist"
Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.295135 5065 scope.go:117] "RemoveContainer" containerID="64b194891a57866256321c60963b1a55abe5ea9137d96d8eea3238c821373834"
Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.295491 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64b194891a57866256321c60963b1a55abe5ea9137d96d8eea3238c821373834\": container with ID starting with 64b194891a57866256321c60963b1a55abe5ea9137d96d8eea3238c821373834 not found: ID does not exist" containerID="64b194891a57866256321c60963b1a55abe5ea9137d96d8eea3238c821373834"
Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.295523 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b194891a57866256321c60963b1a55abe5ea9137d96d8eea3238c821373834"} err="failed to get container status \"64b194891a57866256321c60963b1a55abe5ea9137d96d8eea3238c821373834\": rpc error: code = NotFound desc = could not find container \"64b194891a57866256321c60963b1a55abe5ea9137d96d8eea3238c821373834\": container with ID starting with 64b194891a57866256321c60963b1a55abe5ea9137d96d8eea3238c821373834 not found: ID does not exist"
Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.295543 5065 scope.go:117] "RemoveContainer" containerID="00ff24aff045fd814a16790f59f062842fc0ff9e0601f5b391a537de4dcebe49"
Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.295828 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00ff24aff045fd814a16790f59f062842fc0ff9e0601f5b391a537de4dcebe49\": container with ID starting with 00ff24aff045fd814a16790f59f062842fc0ff9e0601f5b391a537de4dcebe49 not found: ID does not exist" containerID="00ff24aff045fd814a16790f59f062842fc0ff9e0601f5b391a537de4dcebe49"
Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.295869 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00ff24aff045fd814a16790f59f062842fc0ff9e0601f5b391a537de4dcebe49"} err="failed to get container status \"00ff24aff045fd814a16790f59f062842fc0ff9e0601f5b391a537de4dcebe49\": rpc error: code = NotFound desc = could not find container \"00ff24aff045fd814a16790f59f062842fc0ff9e0601f5b391a537de4dcebe49\": container with ID starting with 00ff24aff045fd814a16790f59f062842fc0ff9e0601f5b391a537de4dcebe49 not found: ID does not exist"
Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.295896 5065 scope.go:117] "RemoveContainer" containerID="f0e5c35dcc9f669808490b8ddada56ac33644db4f172f75ea9b908094bd89de9"
Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.296395 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e5c35dcc9f669808490b8ddada56ac33644db4f172f75ea9b908094bd89de9\": container with ID starting with f0e5c35dcc9f669808490b8ddada56ac33644db4f172f75ea9b908094bd89de9 not found: ID does not exist" containerID="f0e5c35dcc9f669808490b8ddada56ac33644db4f172f75ea9b908094bd89de9"
Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.296549 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e5c35dcc9f669808490b8ddada56ac33644db4f172f75ea9b908094bd89de9"} err="failed to get container status \"f0e5c35dcc9f669808490b8ddada56ac33644db4f172f75ea9b908094bd89de9\": rpc error: code = NotFound desc = could not find container \"f0e5c35dcc9f669808490b8ddada56ac33644db4f172f75ea9b908094bd89de9\": container with ID starting with f0e5c35dcc9f669808490b8ddada56ac33644db4f172f75ea9b908094bd89de9 not found: ID does not exist"
Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.296583 5065 scope.go:117] "RemoveContainer" containerID="c7e5718722f1f0cc720ba03200b5072c4500b734f459734ed12c0f62b2f3f7ea"
Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.296931 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7e5718722f1f0cc720ba03200b5072c4500b734f459734ed12c0f62b2f3f7ea\": container with ID starting with c7e5718722f1f0cc720ba03200b5072c4500b734f459734ed12c0f62b2f3f7ea not found: ID does not exist" containerID="c7e5718722f1f0cc720ba03200b5072c4500b734f459734ed12c0f62b2f3f7ea"
Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.297062 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e5718722f1f0cc720ba03200b5072c4500b734f459734ed12c0f62b2f3f7ea"} err="failed to get container status \"c7e5718722f1f0cc720ba03200b5072c4500b734f459734ed12c0f62b2f3f7ea\": rpc error: code = NotFound desc = could not find container \"c7e5718722f1f0cc720ba03200b5072c4500b734f459734ed12c0f62b2f3f7ea\": container with ID starting with c7e5718722f1f0cc720ba03200b5072c4500b734f459734ed12c0f62b2f3f7ea not found: ID does not exist"
Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.297098 5065 scope.go:117] "RemoveContainer" containerID="447f9192660f0cd3d7854a61bd51664ab2997e184354155cd12971c4a4b3c37f"
Oct 08 13:40:53 crc kubenswrapper[5065]: E1008 13:40:53.297928 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"447f9192660f0cd3d7854a61bd51664ab2997e184354155cd12971c4a4b3c37f\": container with ID starting with 447f9192660f0cd3d7854a61bd51664ab2997e184354155cd12971c4a4b3c37f not found: ID does not exist" containerID="447f9192660f0cd3d7854a61bd51664ab2997e184354155cd12971c4a4b3c37f"
Oct 08 13:40:53 crc kubenswrapper[5065]: I1008 13:40:53.297985 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447f9192660f0cd3d7854a61bd51664ab2997e184354155cd12971c4a4b3c37f"} err="failed to get container status \"447f9192660f0cd3d7854a61bd51664ab2997e184354155cd12971c4a4b3c37f\": rpc error: code = NotFound desc = could not find container \"447f9192660f0cd3d7854a61bd51664ab2997e184354155cd12971c4a4b3c37f\": container with ID starting with 447f9192660f0cd3d7854a61bd51664ab2997e184354155cd12971c4a4b3c37f not found: ID does not exist"
Oct 08 13:40:54 crc kubenswrapper[5065]: I1008 13:40:54.886194 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19063d41-be34-463b-8bb7-d45f7d804602" path="/var/lib/kubelet/pods/19063d41-be34-463b-8bb7-d45f7d804602/volumes"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.670691 5065 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod6ce5c750-265a-4589-8f5e-a9e6a846d0d0"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod6ce5c750-265a-4589-8f5e-a9e6a846d0d0] : Timed out while waiting for systemd to remove kubepods-besteffort-pod6ce5c750_265a_4589_8f5e_a9e6a846d0d0.slice"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.787956 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican5412-account-delete-fh88w"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.831685 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi6610-account-delete-vhpbl"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.837526 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutronca6e-account-delete-g4lj7"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.842863 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell0be58-account-delete-r2zk9"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.889457 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k62xc\" (UniqueName: \"kubernetes.io/projected/36ed295f-7baa-466e-8a26-6d923a84d1b5-kube-api-access-k62xc\") pod \"36ed295f-7baa-466e-8a26-6d923a84d1b5\" (UID: \"36ed295f-7baa-466e-8a26-6d923a84d1b5\") "
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.895245 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ed295f-7baa-466e-8a26-6d923a84d1b5-kube-api-access-k62xc" (OuterVolumeSpecName: "kube-api-access-k62xc") pod "36ed295f-7baa-466e-8a26-6d923a84d1b5" (UID: "36ed295f-7baa-466e-8a26-6d923a84d1b5"). InnerVolumeSpecName "kube-api-access-k62xc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.896000 5065 generic.go:334] "Generic (PLEG): container finished" podID="b6712e27-2a2f-43a8-8c79-dd7b5090d987" containerID="4dc7797503fbbdfdc4a93c8aea5b312f41d09c6da1bba330a66bdbab12d2a553" exitCode=137
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.896077 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi6610-account-delete-vhpbl"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.896106 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi6610-account-delete-vhpbl" event={"ID":"b6712e27-2a2f-43a8-8c79-dd7b5090d987","Type":"ContainerDied","Data":"4dc7797503fbbdfdc4a93c8aea5b312f41d09c6da1bba330a66bdbab12d2a553"}
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.896259 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi6610-account-delete-vhpbl" event={"ID":"b6712e27-2a2f-43a8-8c79-dd7b5090d987","Type":"ContainerDied","Data":"42c69cdded70cc31b909d2acc262f84d04a4d3b107b46480f992a616d4dafd0a"}
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.896320 5065 scope.go:117] "RemoveContainer" containerID="4dc7797503fbbdfdc4a93c8aea5b312f41d09c6da1bba330a66bdbab12d2a553"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.904821 5065 generic.go:334] "Generic (PLEG): container finished" podID="80b97e55-65fa-4e4a-becd-d13dd95bb78a" containerID="8d0d43538e67d7cd39237b73d22a8441c267ee322f53837ffdb4cfcd94000113" exitCode=137
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.904888 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutronca6e-account-delete-g4lj7"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.904895 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronca6e-account-delete-g4lj7" event={"ID":"80b97e55-65fa-4e4a-becd-d13dd95bb78a","Type":"ContainerDied","Data":"8d0d43538e67d7cd39237b73d22a8441c267ee322f53837ffdb4cfcd94000113"}
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.904947 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronca6e-account-delete-g4lj7" event={"ID":"80b97e55-65fa-4e4a-becd-d13dd95bb78a","Type":"ContainerDied","Data":"2b68bf33ef924c8c0c4198eb00fa5f8934707e71959dcd5d9fc3a01bf5cc7447"}
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.922786 5065 generic.go:334] "Generic (PLEG): container finished" podID="1582f178-44ee-4e28-a6d6-1d6a29050b56" containerID="ca50e54a3f74190dd8739f07cebb4c9785042034c2060fd50989b0b507f76f61" exitCode=137
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.922868 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell0be58-account-delete-r2zk9"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.922865 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0be58-account-delete-r2zk9" event={"ID":"1582f178-44ee-4e28-a6d6-1d6a29050b56","Type":"ContainerDied","Data":"ca50e54a3f74190dd8739f07cebb4c9785042034c2060fd50989b0b507f76f61"}
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.922940 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0be58-account-delete-r2zk9" event={"ID":"1582f178-44ee-4e28-a6d6-1d6a29050b56","Type":"ContainerDied","Data":"79cbcb9bd528d9748b0a6cdeb3101a4da236964b45d36cba0123973fd33dcbd5"}
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.925176 5065 generic.go:334] "Generic (PLEG): container finished" podID="f60057cf-9f14-4fbd-b161-e27abcc9c7a5" containerID="3c36edfa67f7001b81b7944bb3c24a3e22efa071127d19f3ba64444763136734" exitCode=137
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.925250 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance8439-account-delete-zlfcc" event={"ID":"f60057cf-9f14-4fbd-b161-e27abcc9c7a5","Type":"ContainerDied","Data":"3c36edfa67f7001b81b7944bb3c24a3e22efa071127d19f3ba64444763136734"}
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.930748 5065 scope.go:117] "RemoveContainer" containerID="4dc7797503fbbdfdc4a93c8aea5b312f41d09c6da1bba330a66bdbab12d2a553"
Oct 08 13:40:57 crc kubenswrapper[5065]: E1008 13:40:57.931583 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc7797503fbbdfdc4a93c8aea5b312f41d09c6da1bba330a66bdbab12d2a553\": container with ID starting with 4dc7797503fbbdfdc4a93c8aea5b312f41d09c6da1bba330a66bdbab12d2a553 not found: ID does not exist" containerID="4dc7797503fbbdfdc4a93c8aea5b312f41d09c6da1bba330a66bdbab12d2a553"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.931623 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc7797503fbbdfdc4a93c8aea5b312f41d09c6da1bba330a66bdbab12d2a553"} err="failed to get container status \"4dc7797503fbbdfdc4a93c8aea5b312f41d09c6da1bba330a66bdbab12d2a553\": rpc error: code = NotFound desc = could not find container \"4dc7797503fbbdfdc4a93c8aea5b312f41d09c6da1bba330a66bdbab12d2a553\": container with ID starting with 4dc7797503fbbdfdc4a93c8aea5b312f41d09c6da1bba330a66bdbab12d2a553 not found: ID does not exist"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.931646 5065 scope.go:117] "RemoveContainer" containerID="8d0d43538e67d7cd39237b73d22a8441c267ee322f53837ffdb4cfcd94000113"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.932122 5065 generic.go:334] "Generic (PLEG): container finished" podID="36ed295f-7baa-466e-8a26-6d923a84d1b5" containerID="6c2a06fce67fdc46683435adda6fd6f000c42faa508d2e8e629b9612831f3a53" exitCode=137
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.932162 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican5412-account-delete-fh88w" event={"ID":"36ed295f-7baa-466e-8a26-6d923a84d1b5","Type":"ContainerDied","Data":"6c2a06fce67fdc46683435adda6fd6f000c42faa508d2e8e629b9612831f3a53"}
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.932190 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican5412-account-delete-fh88w" event={"ID":"36ed295f-7baa-466e-8a26-6d923a84d1b5","Type":"ContainerDied","Data":"514f59d9895dc49f96fa16b11f9b3fe76b95bd05f56222aac0f68c052d761558"}
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.932377 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican5412-account-delete-fh88w"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.954978 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance8439-account-delete-zlfcc"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.958235 5065 scope.go:117] "RemoveContainer" containerID="8d0d43538e67d7cd39237b73d22a8441c267ee322f53837ffdb4cfcd94000113"
Oct 08 13:40:57 crc kubenswrapper[5065]: E1008 13:40:57.958791 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0d43538e67d7cd39237b73d22a8441c267ee322f53837ffdb4cfcd94000113\": container with ID starting with 8d0d43538e67d7cd39237b73d22a8441c267ee322f53837ffdb4cfcd94000113 not found: ID does not exist" containerID="8d0d43538e67d7cd39237b73d22a8441c267ee322f53837ffdb4cfcd94000113"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.958828 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0d43538e67d7cd39237b73d22a8441c267ee322f53837ffdb4cfcd94000113"} err="failed to get container status \"8d0d43538e67d7cd39237b73d22a8441c267ee322f53837ffdb4cfcd94000113\": rpc error: code = NotFound desc = could not find container \"8d0d43538e67d7cd39237b73d22a8441c267ee322f53837ffdb4cfcd94000113\": container with ID starting with 8d0d43538e67d7cd39237b73d22a8441c267ee322f53837ffdb4cfcd94000113 not found: ID does not exist"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.958851 5065 scope.go:117] "RemoveContainer" containerID="ca50e54a3f74190dd8739f07cebb4c9785042034c2060fd50989b0b507f76f61"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.969509 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican5412-account-delete-fh88w"]
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.980701 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican5412-account-delete-fh88w"]
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.984991 5065 scope.go:117] "RemoveContainer" containerID="ca50e54a3f74190dd8739f07cebb4c9785042034c2060fd50989b0b507f76f61"
Oct 08 13:40:57 crc kubenswrapper[5065]: E1008 13:40:57.985407 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca50e54a3f74190dd8739f07cebb4c9785042034c2060fd50989b0b507f76f61\": container with ID starting with ca50e54a3f74190dd8739f07cebb4c9785042034c2060fd50989b0b507f76f61 not found: ID does not exist" containerID="ca50e54a3f74190dd8739f07cebb4c9785042034c2060fd50989b0b507f76f61"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.985458 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca50e54a3f74190dd8739f07cebb4c9785042034c2060fd50989b0b507f76f61"} err="failed to get container status \"ca50e54a3f74190dd8739f07cebb4c9785042034c2060fd50989b0b507f76f61\": rpc error: code = NotFound desc = could not find container \"ca50e54a3f74190dd8739f07cebb4c9785042034c2060fd50989b0b507f76f61\": container with ID starting with ca50e54a3f74190dd8739f07cebb4c9785042034c2060fd50989b0b507f76f61 not found: ID does not exist"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.985485 5065 scope.go:117] "RemoveContainer" containerID="6c2a06fce67fdc46683435adda6fd6f000c42faa508d2e8e629b9612831f3a53"
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.990266 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frmmv\" (UniqueName: \"kubernetes.io/projected/1582f178-44ee-4e28-a6d6-1d6a29050b56-kube-api-access-frmmv\") pod \"1582f178-44ee-4e28-a6d6-1d6a29050b56\" (UID: \"1582f178-44ee-4e28-a6d6-1d6a29050b56\") "
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.990351 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np2gj\" (UniqueName: \"kubernetes.io/projected/80b97e55-65fa-4e4a-becd-d13dd95bb78a-kube-api-access-np2gj\") pod \"80b97e55-65fa-4e4a-becd-d13dd95bb78a\" (UID: \"80b97e55-65fa-4e4a-becd-d13dd95bb78a\") "
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.990441 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl9mw\" (UniqueName: \"kubernetes.io/projected/b6712e27-2a2f-43a8-8c79-dd7b5090d987-kube-api-access-jl9mw\") pod \"b6712e27-2a2f-43a8-8c79-dd7b5090d987\" (UID: \"b6712e27-2a2f-43a8-8c79-dd7b5090d987\") "
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.990885 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k62xc\" (UniqueName: \"kubernetes.io/projected/36ed295f-7baa-466e-8a26-6d923a84d1b5-kube-api-access-k62xc\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.993566 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1582f178-44ee-4e28-a6d6-1d6a29050b56-kube-api-access-frmmv" (OuterVolumeSpecName: "kube-api-access-frmmv") pod "1582f178-44ee-4e28-a6d6-1d6a29050b56" (UID: "1582f178-44ee-4e28-a6d6-1d6a29050b56"). InnerVolumeSpecName "kube-api-access-frmmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.993624 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80b97e55-65fa-4e4a-becd-d13dd95bb78a-kube-api-access-np2gj" (OuterVolumeSpecName: "kube-api-access-np2gj") pod "80b97e55-65fa-4e4a-becd-d13dd95bb78a" (UID: "80b97e55-65fa-4e4a-becd-d13dd95bb78a"). InnerVolumeSpecName "kube-api-access-np2gj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:40:57 crc kubenswrapper[5065]: I1008 13:40:57.994111 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6712e27-2a2f-43a8-8c79-dd7b5090d987-kube-api-access-jl9mw" (OuterVolumeSpecName: "kube-api-access-jl9mw") pod "b6712e27-2a2f-43a8-8c79-dd7b5090d987" (UID: "b6712e27-2a2f-43a8-8c79-dd7b5090d987"). InnerVolumeSpecName "kube-api-access-jl9mw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.007778 5065 scope.go:117] "RemoveContainer" containerID="6c2a06fce67fdc46683435adda6fd6f000c42faa508d2e8e629b9612831f3a53"
Oct 08 13:40:58 crc kubenswrapper[5065]: E1008 13:40:58.008258 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c2a06fce67fdc46683435adda6fd6f000c42faa508d2e8e629b9612831f3a53\": container with ID starting with 6c2a06fce67fdc46683435adda6fd6f000c42faa508d2e8e629b9612831f3a53 not found: ID does not exist" containerID="6c2a06fce67fdc46683435adda6fd6f000c42faa508d2e8e629b9612831f3a53"
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.008289 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c2a06fce67fdc46683435adda6fd6f000c42faa508d2e8e629b9612831f3a53"} err="failed to get container status \"6c2a06fce67fdc46683435adda6fd6f000c42faa508d2e8e629b9612831f3a53\": rpc error: code = NotFound desc = could not find container \"6c2a06fce67fdc46683435adda6fd6f000c42faa508d2e8e629b9612831f3a53\": container with ID starting with 6c2a06fce67fdc46683435adda6fd6f000c42faa508d2e8e629b9612831f3a53 not found: ID does not exist"
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.092348 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c28d\" (UniqueName: \"kubernetes.io/projected/f60057cf-9f14-4fbd-b161-e27abcc9c7a5-kube-api-access-6c28d\") pod \"f60057cf-9f14-4fbd-b161-e27abcc9c7a5\" (UID: \"f60057cf-9f14-4fbd-b161-e27abcc9c7a5\") "
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.092853 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frmmv\" (UniqueName: \"kubernetes.io/projected/1582f178-44ee-4e28-a6d6-1d6a29050b56-kube-api-access-frmmv\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.092874 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np2gj\" (UniqueName: \"kubernetes.io/projected/80b97e55-65fa-4e4a-becd-d13dd95bb78a-kube-api-access-np2gj\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.092887 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl9mw\" (UniqueName: \"kubernetes.io/projected/b6712e27-2a2f-43a8-8c79-dd7b5090d987-kube-api-access-jl9mw\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.109646 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60057cf-9f14-4fbd-b161-e27abcc9c7a5-kube-api-access-6c28d" (OuterVolumeSpecName: "kube-api-access-6c28d") pod "f60057cf-9f14-4fbd-b161-e27abcc9c7a5" (UID: "f60057cf-9f14-4fbd-b161-e27abcc9c7a5"). InnerVolumeSpecName "kube-api-access-6c28d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.194638 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c28d\" (UniqueName: \"kubernetes.io/projected/f60057cf-9f14-4fbd-b161-e27abcc9c7a5-kube-api-access-6c28d\") on node \"crc\" DevicePath \"\""
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.225019 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi6610-account-delete-vhpbl"]
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.232008 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapi6610-account-delete-vhpbl"]
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.241622 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutronca6e-account-delete-g4lj7"]
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.252866 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutronca6e-account-delete-g4lj7"]
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.257972 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell0be58-account-delete-r2zk9"]
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.262537 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell0be58-account-delete-r2zk9"]
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.891471 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1582f178-44ee-4e28-a6d6-1d6a29050b56" path="/var/lib/kubelet/pods/1582f178-44ee-4e28-a6d6-1d6a29050b56/volumes"
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.892133 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36ed295f-7baa-466e-8a26-6d923a84d1b5" path="/var/lib/kubelet/pods/36ed295f-7baa-466e-8a26-6d923a84d1b5/volumes"
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.892602 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80b97e55-65fa-4e4a-becd-d13dd95bb78a" path="/var/lib/kubelet/pods/80b97e55-65fa-4e4a-becd-d13dd95bb78a/volumes"
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.892994 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6712e27-2a2f-43a8-8c79-dd7b5090d987" path="/var/lib/kubelet/pods/b6712e27-2a2f-43a8-8c79-dd7b5090d987/volumes"
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.940663 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance8439-account-delete-zlfcc"
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.940643 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance8439-account-delete-zlfcc" event={"ID":"f60057cf-9f14-4fbd-b161-e27abcc9c7a5","Type":"ContainerDied","Data":"39e03284e714bfc16de7dd5386b607696858b20426ed3a90b5267b3e10051e33"}
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.940802 5065 scope.go:117] "RemoveContainer" containerID="3c36edfa67f7001b81b7944bb3c24a3e22efa071127d19f3ba64444763136734"
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.964888 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance8439-account-delete-zlfcc"]
Oct 08 13:40:58 crc kubenswrapper[5065]: I1008 13:40:58.970237 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance8439-account-delete-zlfcc"]
Oct 08 13:41:00 crc kubenswrapper[5065]: I1008 13:41:00.885255 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60057cf-9f14-4fbd-b161-e27abcc9c7a5" path="/var/lib/kubelet/pods/f60057cf-9f14-4fbd-b161-e27abcc9c7a5/volumes"
Oct 08 13:41:25 crc kubenswrapper[5065]: I1008 13:41:25.479789 5065 scope.go:117] "RemoveContainer" containerID="960d5d7c46de5d8c549027a43b4c38ffc8152b31965cc6a2df89d95dbd1e8480"
Oct 08 13:41:25 crc kubenswrapper[5065]: I1008 13:41:25.517482 5065 scope.go:117] "RemoveContainer" containerID="a44b760b0eeef2da2d46263b1c69d9a3f20ef2196ecd4ad96ea02f39ea7d5e50"
Oct 08 13:41:25 crc kubenswrapper[5065]: I1008 13:41:25.557926 5065 scope.go:117] "RemoveContainer" containerID="3dd840b2a1968cb45aa4333789027815be04db1faa5fe300d7fbe1813965b970"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.928694 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xp89v"]
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.930191 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6470de54-fdec-4648-b941-1031c67f55ca" containerName="nova-cell1-novncproxy-novncproxy"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.930217 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="6470de54-fdec-4648-b941-1031c67f55ca" containerName="nova-cell1-novncproxy-novncproxy"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.930238 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3d89be-0a42-4a3d-914c-3bff67bd37b4" containerName="setup-container"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.930251 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3d89be-0a42-4a3d-914c-3bff67bd37b4" containerName="setup-container"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.930303 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="ceilometer-notification-agent"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.930317 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="ceilometer-notification-agent"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.930333 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovsdb-server-init"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.930356 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovsdb-server-init"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931086 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60057cf-9f14-4fbd-b161-e27abcc9c7a5" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931134 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60057cf-9f14-4fbd-b161-e27abcc9c7a5" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931176 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3d89be-0a42-4a3d-914c-3bff67bd37b4" containerName="rabbitmq"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931190 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3d89be-0a42-4a3d-914c-3bff67bd37b4" containerName="rabbitmq"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931203 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-auditor"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931222 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-auditor"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931236 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf670f8-9cf6-4200-8036-05e9798cad78" containerName="proxy-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931249 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf670f8-9cf6-4200-8036-05e9798cad78" containerName="proxy-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931274 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" containerName="mysql-bootstrap"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931285 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" containerName="mysql-bootstrap"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931309 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-replicator"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931320 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-replicator"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931376 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4749b7e4-3896-474d-84b3-8ddf351a24ac" containerName="ovn-controller"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931387 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4749b7e4-3896-474d-84b3-8ddf351a24ac" containerName="ovn-controller"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931401 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eba221c-653d-434a-a486-16be41c4a5c4" containerName="openstack-network-exporter"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931439 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eba221c-653d-434a-a486-16be41c4a5c4" containerName="openstack-network-exporter"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931454 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovs-vswitchd"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931465 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovs-vswitchd"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931505 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa6e8e72-d895-4018-a176-978d7975d8a6" containerName="glance-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931516 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa6e8e72-d895-4018-a176-978d7975d8a6" containerName="glance-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931535 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d473a1f-35dc-4b20-a344-19c23f1c8c06" containerName="cinder-scheduler"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931548 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d473a1f-35dc-4b20-a344-19c23f1c8c06" containerName="cinder-scheduler"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931576 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1582f178-44ee-4e28-a6d6-1d6a29050b56" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931586 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="1582f178-44ee-4e28-a6d6-1d6a29050b56" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931621 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbca12dd-73a9-4533-b424-ebaf0c8cec0c" containerName="nova-api-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931632 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbca12dd-73a9-4533-b424-ebaf0c8cec0c" containerName="nova-api-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931647 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b97e55-65fa-4e4a-becd-d13dd95bb78a" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931657 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b97e55-65fa-4e4a-becd-d13dd95bb78a" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931692 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f580765e-50e7-42a1-a798-325b80e29e9d" containerName="barbican-keystone-listener-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931703 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f580765e-50e7-42a1-a798-325b80e29e9d" containerName="barbican-keystone-listener-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931723 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-auditor"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931735 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-auditor"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931769 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5926be-c223-4cbc-b6e3-a16726aa6c84" containerName="placement-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931782 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5926be-c223-4cbc-b6e3-a16726aa6c84" containerName="placement-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931797 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d28af9-b1bc-4475-abc6-9c33380349e9" containerName="nova-cell0-conductor-conductor"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931808 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d28af9-b1bc-4475-abc6-9c33380349e9" containerName="nova-cell0-conductor-conductor"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931830 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-reaper"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931840 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-reaper"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931865 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050c0e99-7984-43be-8701-84602f0c9294" containerName="mysql-bootstrap"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931877 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="050c0e99-7984-43be-8701-84602f0c9294" containerName="mysql-bootstrap"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931910 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" containerName="galera"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931924 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" containerName="galera"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.931962 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" containerName="cinder-api-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.931973 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" containerName="cinder-api-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932175 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="ceilometer-central-agent"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932188 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="ceilometer-central-agent"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932200 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fe9b6a-9cdf-4585-a585-474172306dd9" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932212 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fe9b6a-9cdf-4585-a585-474172306dd9" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932225 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" containerName="cinder-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932236 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" containerName="cinder-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932275 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2f3965-f057-4b1d-bbc9-7235ac48ed49" containerName="ovn-northd"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932286 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2f3965-f057-4b1d-bbc9-7235ac48ed49" containerName="ovn-northd"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932318 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18710aa1-a99f-421b-9a4f-694362061773" containerName="dnsmasq-dns"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932330 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="18710aa1-a99f-421b-9a4f-694362061773" containerName="dnsmasq-dns"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932354 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a416f725-cd7c-4bd8-9123-28cad18157d9" containerName="setup-container"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932367 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a416f725-cd7c-4bd8-9123-28cad18157d9" containerName="setup-container"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932392 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a29eea83-9d60-4101-a351-6f8468a8116c" containerName="memcached"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932404 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a29eea83-9d60-4101-a351-6f8468a8116c" containerName="memcached"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932439 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="sg-core"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932451 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="sg-core"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932473 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="493c63a1-0210-4a70-a964-79522491fd05" containerName="nova-metadata-metadata"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932485 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="493c63a1-0210-4a70-a964-79522491fd05" containerName="nova-metadata-metadata"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932499 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cea80f5-d915-459c-9882-4ce114929ab4" containerName="glance-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932511 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cea80f5-d915-459c-9882-4ce114929ab4" containerName="glance-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932558 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd3f72f8-a569-409f-a590-02a0f7fcdc81" containerName="neutron-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932570 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd3f72f8-a569-409f-a590-02a0f7fcdc81" containerName="neutron-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932583 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0643aa92-2649-4c41-b16e-9a05aac93f35" containerName="keystone-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932593 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="0643aa92-2649-4c41-b16e-9a05aac93f35" containerName="keystone-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932617 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b215a42c-d422-4db9-a83e-df79f7bff9e6" containerName="ovsdbserver-nb"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932629 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b215a42c-d422-4db9-a83e-df79f7bff9e6" containerName="ovsdbserver-nb"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932654 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ab13f6-4348-4848-9149-4d1ee240d1ed" containerName="nova-cell1-conductor-conductor"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932665 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ab13f6-4348-4848-9149-4d1ee240d1ed" containerName="nova-cell1-conductor-conductor"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932701 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" containerName="barbican-worker"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932714 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" containerName="barbican-worker"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932741 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa6e8e72-d895-4018-a176-978d7975d8a6" containerName="glance-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932752 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa6e8e72-d895-4018-a176-978d7975d8a6" containerName="glance-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932791 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce5c750-265a-4589-8f5e-a9e6a846d0d0" containerName="nova-scheduler-scheduler"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932805 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce5c750-265a-4589-8f5e-a9e6a846d0d0" containerName="nova-scheduler-scheduler"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932839 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932851 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932873 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a416f725-cd7c-4bd8-9123-28cad18157d9" containerName="rabbitmq"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932884 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a416f725-cd7c-4bd8-9123-28cad18157d9" containerName="rabbitmq"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932909 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fd97a6-e936-4503-a238-97b63e01a7de" containerName="ovsdbserver-sb"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932921 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fd97a6-e936-4503-a238-97b63e01a7de" containerName="ovsdbserver-sb"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932936 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-auditor"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932948 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-auditor"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932969 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f580765e-50e7-42a1-a798-325b80e29e9d" containerName="barbican-keystone-listener"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.932979 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f580765e-50e7-42a1-a798-325b80e29e9d" containerName="barbican-keystone-listener"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.932991 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-expirer"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933003 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-expirer"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933023 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd3f72f8-a569-409f-a590-02a0f7fcdc81" containerName="neutron-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933034 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd3f72f8-a569-409f-a590-02a0f7fcdc81" containerName="neutron-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933045 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933056 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933070 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" containerName="barbican-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933081 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" containerName="barbican-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933113 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="swift-recon-cron"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933124 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="swift-recon-cron"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933136 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf670f8-9cf6-4200-8036-05e9798cad78" containerName="proxy-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933147 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf670f8-9cf6-4200-8036-05e9798cad78" containerName="proxy-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933177 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5926be-c223-4cbc-b6e3-a16726aa6c84" containerName="placement-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933189 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5926be-c223-4cbc-b6e3-a16726aa6c84" containerName="placement-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933224 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d473a1f-35dc-4b20-a344-19c23f1c8c06" containerName="probe"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933235 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d473a1f-35dc-4b20-a344-19c23f1c8c06" containerName="probe"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933268 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="493c63a1-0210-4a70-a964-79522491fd05" containerName="nova-metadata-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933279 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="493c63a1-0210-4a70-a964-79522491fd05" containerName="nova-metadata-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933315 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbca12dd-73a9-4533-b424-ebaf0c8cec0c" containerName="nova-api-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933327 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbca12dd-73a9-4533-b424-ebaf0c8cec0c" containerName="nova-api-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933364 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933376 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933402 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ed295f-7baa-466e-8a26-6d923a84d1b5" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933435 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ed295f-7baa-466e-8a26-6d923a84d1b5" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933473 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fd97a6-e936-4503-a238-97b63e01a7de" containerName="openstack-network-exporter"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933484 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fd97a6-e936-4503-a238-97b63e01a7de" containerName="openstack-network-exporter"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933506 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b215a42c-d422-4db9-a83e-df79f7bff9e6" containerName="openstack-network-exporter"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933517 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b215a42c-d422-4db9-a83e-df79f7bff9e6" containerName="openstack-network-exporter"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933554 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovsdb-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933566 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovsdb-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933576 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18710aa1-a99f-421b-9a4f-694362061773" containerName="init"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933586 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="18710aa1-a99f-421b-9a4f-694362061773" containerName="init"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933611 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c2f3965-f057-4b1d-bbc9-7235ac48ed49" containerName="openstack-network-exporter"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933622 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c2f3965-f057-4b1d-bbc9-7235ac48ed49" containerName="openstack-network-exporter"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933655 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050c0e99-7984-43be-8701-84602f0c9294" containerName="galera"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933666 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="050c0e99-7984-43be-8701-84602f0c9294" containerName="galera"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933702 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-updater"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933715 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-updater"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933742 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-updater"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933761 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-updater"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933787 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde619b2-b551-4a41-b2f2-c38f1b507a82" containerName="kube-state-metrics"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933797 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde619b2-b551-4a41-b2f2-c38f1b507a82" containerName="kube-state-metrics"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933832 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cea80f5-d915-459c-9882-4ce114929ab4" containerName="glance-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933843 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cea80f5-d915-459c-9882-4ce114929ab4" containerName="glance-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933877 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-replicator"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933889 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-replicator"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933901 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="rsync"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933911 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="rsync"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933921 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="proxy-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933931 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="proxy-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933954 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-replicator"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933965 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-replicator"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.933988 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" containerName="barbican-worker-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.933998 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" containerName="barbican-worker-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.934017 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" containerName="barbican-api-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.934029 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" containerName="barbican-api-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: E1008 13:41:40.934055 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6712e27-2a2f-43a8-8c79-dd7b5090d987" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.934067 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6712e27-2a2f-43a8-8c79-dd7b5090d987" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.941828 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cea80f5-d915-459c-9882-4ce114929ab4" containerName="glance-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.941867 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d28af9-b1bc-4475-abc6-9c33380349e9" containerName="nova-cell0-conductor-conductor"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.941890 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c2f3965-f057-4b1d-bbc9-7235ac48ed49" containerName="ovn-northd"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.941912 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0643aa92-2649-4c41-b16e-9a05aac93f35" containerName="keystone-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.941921 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eba221c-653d-434a-a486-16be41c4a5c4" containerName="openstack-network-exporter"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.941936 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.941959 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="f580765e-50e7-42a1-a798-325b80e29e9d" containerName="barbican-keystone-listener-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.941980 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="b215a42c-d422-4db9-a83e-df79f7bff9e6" containerName="openstack-network-exporter"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.941994 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-auditor"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942017 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fd97a6-e936-4503-a238-97b63e01a7de" containerName="ovsdbserver-sb"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942025 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cea80f5-d915-459c-9882-4ce114929ab4" containerName="glance-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942047 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="03eb50e9-c0b5-4f96-8dd0-27d776f8c71e" containerName="galera"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942070 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-updater"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942088 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942113 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd3f72f8-a569-409f-a590-02a0f7fcdc81" containerName="neutron-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942134 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-updater"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942148 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fe9b6a-9cdf-4585-a585-474172306dd9" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942170 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-replicator"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942190 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="container-replicator"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942212 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60057cf-9f14-4fbd-b161-e27abcc9c7a5" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942226 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-auditor"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942241 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-replicator"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942249 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="rsync"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942270 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" containerName="barbican-worker"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942290 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942297 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ab13f6-4348-4848-9149-4d1ee240d1ed" containerName="nova-cell1-conductor-conductor"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942306 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5e8a94-d14f-4b2e-9a5f-a09c9f4e0cac" containerName="barbican-worker-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942314 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" containerName="barbican-api-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942323 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5926be-c223-4cbc-b6e3-a16726aa6c84" containerName="placement-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942330 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce5c750-265a-4589-8f5e-a9e6a846d0d0" containerName="nova-scheduler-scheduler"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942336 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a29eea83-9d60-4101-a351-6f8468a8116c" containerName="memcached"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942341 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="proxy-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942353 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6712e27-2a2f-43a8-8c79-dd7b5090d987" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942362 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="80b97e55-65fa-4e4a-becd-d13dd95bb78a" containerName="mariadb-account-delete"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942375 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde619b2-b551-4a41-b2f2-c38f1b507a82" containerName="kube-state-metrics"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942384 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c2f3965-f057-4b1d-bbc9-7235ac48ed49" containerName="openstack-network-exporter"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942394 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5926be-c223-4cbc-b6e3-a16726aa6c84" containerName="placement-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942428 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d473a1f-35dc-4b20-a344-19c23f1c8c06" containerName="probe"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942448 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="caf670f8-9cf6-4200-8036-05e9798cad78" containerName="proxy-server"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942454 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="18710aa1-a99f-421b-9a4f-694362061773" containerName="dnsmasq-dns"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942459 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="493c63a1-0210-4a70-a964-79522491fd05" containerName="nova-metadata-metadata"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942477 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" containerName="cinder-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942489 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="f580765e-50e7-42a1-a798-325b80e29e9d" containerName="barbican-keystone-listener"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942500 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6ab417-1dfb-4427-a34e-fd8cf995b4c7" containerName="barbican-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942510 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd3f72f8-a569-409f-a590-02a0f7fcdc81" containerName="neutron-api"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942516 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa6e8e72-d895-4018-a176-978d7975d8a6" containerName="glance-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942522 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa6e8e72-d895-4018-a176-978d7975d8a6" containerName="glance-httpd"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942533 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="493c63a1-0210-4a70-a964-79522491fd05" containerName="nova-metadata-log"
Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942542 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-auditor"
Oct 08 13:41:40 crc
kubenswrapper[5065]: I1008 13:41:40.942559 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbca12dd-73a9-4533-b424-ebaf0c8cec0c" containerName="nova-api-api" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942566 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4749b7e4-3896-474d-84b3-8ddf351a24ac" containerName="ovn-controller" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942573 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ed295f-7baa-466e-8a26-6d923a84d1b5" containerName="mariadb-account-delete" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942592 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a416f725-cd7c-4bd8-9123-28cad18157d9" containerName="rabbitmq" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942605 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbca12dd-73a9-4533-b424-ebaf0c8cec0c" containerName="nova-api-log" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942619 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="b215a42c-d422-4db9-a83e-df79f7bff9e6" containerName="ovsdbserver-nb" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942632 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="050c0e99-7984-43be-8701-84602f0c9294" containerName="galera" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942645 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fd97a6-e936-4503-a238-97b63e01a7de" containerName="openstack-network-exporter" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942661 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="object-expirer" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942674 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d473a1f-35dc-4b20-a344-19c23f1c8c06" containerName="cinder-scheduler" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942680 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="ceilometer-notification-agent" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942695 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="sg-core" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942704 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d5e818-6480-4dfb-b8a2-50dc4ec58dad" containerName="cinder-api-log" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942716 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ec06a5-eadd-4545-a157-1aa731eabe13" containerName="ceilometer-central-agent" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942729 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="1582f178-44ee-4e28-a6d6-1d6a29050b56" containerName="mariadb-account-delete" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942744 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovsdb-server" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942751 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="f523d852-2e73-4168-b3ca-af18fa28cc07" containerName="ovs-vswitchd" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942761 5065 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="account-reaper" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942775 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="6470de54-fdec-4648-b941-1031c67f55ca" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942791 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19063d41-be34-463b-8bb7-d45f7d804602" containerName="swift-recon-cron" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942805 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="caf670f8-9cf6-4200-8036-05e9798cad78" containerName="proxy-httpd" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.942846 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae3d89be-0a42-4a3d-914c-3bff67bd37b4" containerName="rabbitmq" Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.945045 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xp89v"] Oct 08 13:41:40 crc kubenswrapper[5065]: I1008 13:41:40.945163 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:41 crc kubenswrapper[5065]: I1008 13:41:41.064407 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-catalog-content\") pod \"redhat-operators-xp89v\" (UID: \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\") " pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:41 crc kubenswrapper[5065]: I1008 13:41:41.064560 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fc2l\" (UniqueName: \"kubernetes.io/projected/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-kube-api-access-2fc2l\") pod \"redhat-operators-xp89v\" (UID: \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\") " pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:41 crc kubenswrapper[5065]: I1008 13:41:41.064628 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-utilities\") pod \"redhat-operators-xp89v\" (UID: \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\") " pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:41 crc kubenswrapper[5065]: I1008 13:41:41.166814 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-catalog-content\") pod \"redhat-operators-xp89v\" (UID: \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\") " pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:41 crc kubenswrapper[5065]: I1008 13:41:41.167111 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fc2l\" (UniqueName: \"kubernetes.io/projected/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-kube-api-access-2fc2l\") pod \"redhat-operators-xp89v\" (UID: \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\") " pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:41 crc kubenswrapper[5065]: I1008 13:41:41.167224 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-utilities\") pod 
\"redhat-operators-xp89v\" (UID: \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\") " pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:41 crc kubenswrapper[5065]: I1008 13:41:41.167399 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-catalog-content\") pod \"redhat-operators-xp89v\" (UID: \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\") " pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:41 crc kubenswrapper[5065]: I1008 13:41:41.167737 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-utilities\") pod \"redhat-operators-xp89v\" (UID: \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\") " pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:41 crc kubenswrapper[5065]: I1008 13:41:41.191634 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fc2l\" (UniqueName: \"kubernetes.io/projected/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-kube-api-access-2fc2l\") pod \"redhat-operators-xp89v\" (UID: \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\") " pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:41 crc kubenswrapper[5065]: I1008 13:41:41.274538 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:41 crc kubenswrapper[5065]: I1008 13:41:41.519981 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xp89v"] Oct 08 13:41:42 crc kubenswrapper[5065]: I1008 13:41:42.416736 5065 generic.go:334] "Generic (PLEG): container finished" podID="c04b482d-2693-4a20-b5b0-ad16dd75cf2c" containerID="8ab715f093bb8fe222d00ca186feb4a6557a99e665efdb79fa690a1ea8092919" exitCode=0 Oct 08 13:41:42 crc kubenswrapper[5065]: I1008 13:41:42.416785 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xp89v" event={"ID":"c04b482d-2693-4a20-b5b0-ad16dd75cf2c","Type":"ContainerDied","Data":"8ab715f093bb8fe222d00ca186feb4a6557a99e665efdb79fa690a1ea8092919"} Oct 08 13:41:42 crc kubenswrapper[5065]: I1008 13:41:42.416816 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xp89v" event={"ID":"c04b482d-2693-4a20-b5b0-ad16dd75cf2c","Type":"ContainerStarted","Data":"228f9d23f08df90d406ab8f6153a826f53a234db4f06710c2eb2cac7bbfab1ac"} Oct 08 13:41:43 crc kubenswrapper[5065]: I1008 13:41:43.505256 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lwzw2"] Oct 08 13:41:43 crc kubenswrapper[5065]: I1008 13:41:43.507223 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:43 crc kubenswrapper[5065]: I1008 13:41:43.529643 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwzw2"] Oct 08 13:41:43 crc kubenswrapper[5065]: I1008 13:41:43.606571 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvxtg\" (UniqueName: \"kubernetes.io/projected/4d0d9751-0be6-40d1-adb1-33abee551312-kube-api-access-xvxtg\") pod \"redhat-marketplace-lwzw2\" (UID: \"4d0d9751-0be6-40d1-adb1-33abee551312\") " pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:43 crc kubenswrapper[5065]: I1008 13:41:43.606694 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0d9751-0be6-40d1-adb1-33abee551312-catalog-content\") pod \"redhat-marketplace-lwzw2\" (UID: \"4d0d9751-0be6-40d1-adb1-33abee551312\") " pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:43 crc kubenswrapper[5065]: I1008 13:41:43.606772 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0d9751-0be6-40d1-adb1-33abee551312-utilities\") pod \"redhat-marketplace-lwzw2\" (UID: \"4d0d9751-0be6-40d1-adb1-33abee551312\") " pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:43 crc kubenswrapper[5065]: I1008 13:41:43.708570 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0d9751-0be6-40d1-adb1-33abee551312-catalog-content\") pod \"redhat-marketplace-lwzw2\" (UID: \"4d0d9751-0be6-40d1-adb1-33abee551312\") " pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:43 crc kubenswrapper[5065]: I1008 13:41:43.708920 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0d9751-0be6-40d1-adb1-33abee551312-utilities\") pod \"redhat-marketplace-lwzw2\" (UID: \"4d0d9751-0be6-40d1-adb1-33abee551312\") " pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:43 crc kubenswrapper[5065]: I1008 13:41:43.708988 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvxtg\" (UniqueName: \"kubernetes.io/projected/4d0d9751-0be6-40d1-adb1-33abee551312-kube-api-access-xvxtg\") pod \"redhat-marketplace-lwzw2\" (UID: \"4d0d9751-0be6-40d1-adb1-33abee551312\") " pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:43 crc kubenswrapper[5065]: I1008 13:41:43.709033 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0d9751-0be6-40d1-adb1-33abee551312-catalog-content\") pod \"redhat-marketplace-lwzw2\" (UID: \"4d0d9751-0be6-40d1-adb1-33abee551312\") " pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:43 crc kubenswrapper[5065]: I1008 13:41:43.709278 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0d9751-0be6-40d1-adb1-33abee551312-utilities\") pod \"redhat-marketplace-lwzw2\" (UID: \"4d0d9751-0be6-40d1-adb1-33abee551312\") " pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:43 crc kubenswrapper[5065]: I1008 13:41:43.730156 5065 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xvxtg\" (UniqueName: \"kubernetes.io/projected/4d0d9751-0be6-40d1-adb1-33abee551312-kube-api-access-xvxtg\") pod \"redhat-marketplace-lwzw2\" (UID: \"4d0d9751-0be6-40d1-adb1-33abee551312\") " pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:43 crc kubenswrapper[5065]: I1008 13:41:43.843995 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:44 crc kubenswrapper[5065]: I1008 13:41:44.254741 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwzw2"] Oct 08 13:41:44 crc kubenswrapper[5065]: I1008 13:41:44.432488 5065 generic.go:334] "Generic (PLEG): container finished" podID="4d0d9751-0be6-40d1-adb1-33abee551312" containerID="103f10bb2a36e64f8f28e5ff693b59919ebaed0f52805ee774f9a095bd9440eb" exitCode=0 Oct 08 13:41:44 crc kubenswrapper[5065]: I1008 13:41:44.432575 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwzw2" event={"ID":"4d0d9751-0be6-40d1-adb1-33abee551312","Type":"ContainerDied","Data":"103f10bb2a36e64f8f28e5ff693b59919ebaed0f52805ee774f9a095bd9440eb"} Oct 08 13:41:44 crc kubenswrapper[5065]: I1008 13:41:44.432900 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwzw2" event={"ID":"4d0d9751-0be6-40d1-adb1-33abee551312","Type":"ContainerStarted","Data":"770ff0c312411ce3dab39d2a0894663b852437c8f03428afad54cedd20f450d2"} Oct 08 13:41:44 crc kubenswrapper[5065]: I1008 13:41:44.435456 5065 generic.go:334] "Generic (PLEG): container finished" podID="c04b482d-2693-4a20-b5b0-ad16dd75cf2c" containerID="21ca93a7b5795a4cd938f8aadcd75ab05a974f77b201454b9a9ecff98d389329" exitCode=0 Oct 08 13:41:44 crc kubenswrapper[5065]: I1008 13:41:44.435484 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xp89v" event={"ID":"c04b482d-2693-4a20-b5b0-ad16dd75cf2c","Type":"ContainerDied","Data":"21ca93a7b5795a4cd938f8aadcd75ab05a974f77b201454b9a9ecff98d389329"} Oct 08 13:41:45 crc kubenswrapper[5065]: I1008 13:41:45.461632 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xp89v" event={"ID":"c04b482d-2693-4a20-b5b0-ad16dd75cf2c","Type":"ContainerStarted","Data":"5fc88bee3fc4ad81c06080843770fe610037019a880e88baea593152d56ea21f"} Oct 08 13:41:45 crc kubenswrapper[5065]: I1008 13:41:45.491211 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xp89v" podStartSLOduration=3.035171879 podStartE2EDuration="5.491188665s" podCreationTimestamp="2025-10-08 13:41:40 +0000 UTC" firstStartedPulling="2025-10-08 13:41:42.418721164 +0000 UTC m=+1404.196102921" lastFinishedPulling="2025-10-08 13:41:44.87473793 +0000 UTC m=+1406.652119707" observedRunningTime="2025-10-08 13:41:45.487824844 +0000 UTC m=+1407.265206601" watchObservedRunningTime="2025-10-08 13:41:45.491188665 +0000 UTC m=+1407.268570432" Oct 08 13:41:46 crc kubenswrapper[5065]: I1008 13:41:46.480248 5065 generic.go:334] "Generic (PLEG): container finished" podID="4d0d9751-0be6-40d1-adb1-33abee551312" containerID="4c5b635cc8c6e949deabb484def16d6f8dbe1c400c9f27752c37137f46f867e0" exitCode=0 Oct 08 13:41:46 crc kubenswrapper[5065]: I1008 13:41:46.480327 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwzw2" 
event={"ID":"4d0d9751-0be6-40d1-adb1-33abee551312","Type":"ContainerDied","Data":"4c5b635cc8c6e949deabb484def16d6f8dbe1c400c9f27752c37137f46f867e0"} Oct 08 13:41:47 crc kubenswrapper[5065]: I1008 13:41:47.489542 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwzw2" event={"ID":"4d0d9751-0be6-40d1-adb1-33abee551312","Type":"ContainerStarted","Data":"67957efdabcf7279e38085a302e6745f0b822400f8f5dcbfa8c2513827de3369"} Oct 08 13:41:47 crc kubenswrapper[5065]: I1008 13:41:47.507119 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lwzw2" podStartSLOduration=1.867075733 podStartE2EDuration="4.507100692s" podCreationTimestamp="2025-10-08 13:41:43 +0000 UTC" firstStartedPulling="2025-10-08 13:41:44.434924039 +0000 UTC m=+1406.212305796" lastFinishedPulling="2025-10-08 13:41:47.074948998 +0000 UTC m=+1408.852330755" observedRunningTime="2025-10-08 13:41:47.505648273 +0000 UTC m=+1409.283030040" watchObservedRunningTime="2025-10-08 13:41:47.507100692 +0000 UTC m=+1409.284482449" Oct 08 13:41:51 crc kubenswrapper[5065]: I1008 13:41:51.275346 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:51 crc kubenswrapper[5065]: I1008 13:41:51.275732 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:51 crc kubenswrapper[5065]: I1008 13:41:51.341748 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:51 crc kubenswrapper[5065]: I1008 13:41:51.587250 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:52 crc kubenswrapper[5065]: I1008 13:41:52.494489 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xp89v"] Oct 08 13:41:53 crc kubenswrapper[5065]: I1008 13:41:53.546975 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xp89v" podUID="c04b482d-2693-4a20-b5b0-ad16dd75cf2c" containerName="registry-server" containerID="cri-o://5fc88bee3fc4ad81c06080843770fe610037019a880e88baea593152d56ea21f" gracePeriod=2 Oct 08 13:41:53 crc kubenswrapper[5065]: I1008 13:41:53.844798 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:53 crc kubenswrapper[5065]: I1008 13:41:53.845231 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:53 crc kubenswrapper[5065]: I1008 13:41:53.904384 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:53 crc kubenswrapper[5065]: I1008 13:41:53.974675 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.097876 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-catalog-content\") pod \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\" (UID: \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\") " Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.098050 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fc2l\" (UniqueName: \"kubernetes.io/projected/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-kube-api-access-2fc2l\") pod \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\" (UID: \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\") " Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.098098 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-utilities\") pod \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\" (UID: \"c04b482d-2693-4a20-b5b0-ad16dd75cf2c\") " Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.099330 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-utilities" (OuterVolumeSpecName: "utilities") pod "c04b482d-2693-4a20-b5b0-ad16dd75cf2c" (UID: "c04b482d-2693-4a20-b5b0-ad16dd75cf2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.107829 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-kube-api-access-2fc2l" (OuterVolumeSpecName: "kube-api-access-2fc2l") pod "c04b482d-2693-4a20-b5b0-ad16dd75cf2c" (UID: "c04b482d-2693-4a20-b5b0-ad16dd75cf2c"). InnerVolumeSpecName "kube-api-access-2fc2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.199435 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fc2l\" (UniqueName: \"kubernetes.io/projected/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-kube-api-access-2fc2l\") on node \"crc\" DevicePath \"\"" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.199471 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.375088 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.375168 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.556588 5065 generic.go:334] "Generic (PLEG): container finished" podID="c04b482d-2693-4a20-b5b0-ad16dd75cf2c" containerID="5fc88bee3fc4ad81c06080843770fe610037019a880e88baea593152d56ea21f" exitCode=0 Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.556769 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xp89v" event={"ID":"c04b482d-2693-4a20-b5b0-ad16dd75cf2c","Type":"ContainerDied","Data":"5fc88bee3fc4ad81c06080843770fe610037019a880e88baea593152d56ea21f"} Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.556871 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xp89v" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.558216 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xp89v" event={"ID":"c04b482d-2693-4a20-b5b0-ad16dd75cf2c","Type":"ContainerDied","Data":"228f9d23f08df90d406ab8f6153a826f53a234db4f06710c2eb2cac7bbfab1ac"} Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.558291 5065 scope.go:117] "RemoveContainer" containerID="5fc88bee3fc4ad81c06080843770fe610037019a880e88baea593152d56ea21f" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.595164 5065 scope.go:117] "RemoveContainer" containerID="21ca93a7b5795a4cd938f8aadcd75ab05a974f77b201454b9a9ecff98d389329" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.601098 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.629945 5065 scope.go:117] "RemoveContainer" containerID="8ab715f093bb8fe222d00ca186feb4a6557a99e665efdb79fa690a1ea8092919" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.655599 5065 scope.go:117] "RemoveContainer" containerID="5fc88bee3fc4ad81c06080843770fe610037019a880e88baea593152d56ea21f" Oct 08 13:41:54 crc kubenswrapper[5065]: E1008 13:41:54.656186 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fc88bee3fc4ad81c06080843770fe610037019a880e88baea593152d56ea21f\": container with ID starting with 5fc88bee3fc4ad81c06080843770fe610037019a880e88baea593152d56ea21f not found: ID does not exist" containerID="5fc88bee3fc4ad81c06080843770fe610037019a880e88baea593152d56ea21f" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.656230 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc88bee3fc4ad81c06080843770fe610037019a880e88baea593152d56ea21f"} err="failed to get container status \"5fc88bee3fc4ad81c06080843770fe610037019a880e88baea593152d56ea21f\": rpc error: code = NotFound desc = could not find container \"5fc88bee3fc4ad81c06080843770fe610037019a880e88baea593152d56ea21f\": container with ID starting with 5fc88bee3fc4ad81c06080843770fe610037019a880e88baea593152d56ea21f not found: ID does not exist" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.656257 5065 scope.go:117] "RemoveContainer" containerID="21ca93a7b5795a4cd938f8aadcd75ab05a974f77b201454b9a9ecff98d389329" Oct 08 13:41:54 crc kubenswrapper[5065]: E1008 13:41:54.656754 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ca93a7b5795a4cd938f8aadcd75ab05a974f77b201454b9a9ecff98d389329\": container with ID starting with 21ca93a7b5795a4cd938f8aadcd75ab05a974f77b201454b9a9ecff98d389329 not found: ID does not exist" containerID="21ca93a7b5795a4cd938f8aadcd75ab05a974f77b201454b9a9ecff98d389329" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.656787 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ca93a7b5795a4cd938f8aadcd75ab05a974f77b201454b9a9ecff98d389329"} err="failed to get container status \"21ca93a7b5795a4cd938f8aadcd75ab05a974f77b201454b9a9ecff98d389329\": rpc error: code = NotFound desc = could not find container \"21ca93a7b5795a4cd938f8aadcd75ab05a974f77b201454b9a9ecff98d389329\": container with ID starting with 
21ca93a7b5795a4cd938f8aadcd75ab05a974f77b201454b9a9ecff98d389329 not found: ID does not exist" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.656804 5065 scope.go:117] "RemoveContainer" containerID="8ab715f093bb8fe222d00ca186feb4a6557a99e665efdb79fa690a1ea8092919" Oct 08 13:41:54 crc kubenswrapper[5065]: E1008 13:41:54.657039 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ab715f093bb8fe222d00ca186feb4a6557a99e665efdb79fa690a1ea8092919\": container with ID starting with 8ab715f093bb8fe222d00ca186feb4a6557a99e665efdb79fa690a1ea8092919 not found: ID does not exist" containerID="8ab715f093bb8fe222d00ca186feb4a6557a99e665efdb79fa690a1ea8092919" Oct 08 13:41:54 crc kubenswrapper[5065]: I1008 13:41:54.657064 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab715f093bb8fe222d00ca186feb4a6557a99e665efdb79fa690a1ea8092919"} err="failed to get container status \"8ab715f093bb8fe222d00ca186feb4a6557a99e665efdb79fa690a1ea8092919\": rpc error: code = NotFound desc = could not find container \"8ab715f093bb8fe222d00ca186feb4a6557a99e665efdb79fa690a1ea8092919\": container with ID starting with 8ab715f093bb8fe222d00ca186feb4a6557a99e665efdb79fa690a1ea8092919 not found: ID does not exist" Oct 08 13:41:55 crc kubenswrapper[5065]: I1008 13:41:55.153548 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c04b482d-2693-4a20-b5b0-ad16dd75cf2c" (UID: "c04b482d-2693-4a20-b5b0-ad16dd75cf2c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:41:55 crc kubenswrapper[5065]: I1008 13:41:55.197362 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xp89v"] Oct 08 13:41:55 crc kubenswrapper[5065]: I1008 13:41:55.201659 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xp89v"] Oct 08 13:41:55 crc kubenswrapper[5065]: I1008 13:41:55.215935 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04b482d-2693-4a20-b5b0-ad16dd75cf2c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:41:55 crc kubenswrapper[5065]: I1008 13:41:55.689320 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwzw2"] Oct 08 13:41:56 crc kubenswrapper[5065]: I1008 13:41:56.580645 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lwzw2" podUID="4d0d9751-0be6-40d1-adb1-33abee551312" containerName="registry-server" containerID="cri-o://67957efdabcf7279e38085a302e6745f0b822400f8f5dcbfa8c2513827de3369" gracePeriod=2 Oct 08 13:41:56 crc kubenswrapper[5065]: I1008 13:41:56.886408 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04b482d-2693-4a20-b5b0-ad16dd75cf2c" path="/var/lib/kubelet/pods/c04b482d-2693-4a20-b5b0-ad16dd75cf2c/volumes" Oct 08 13:41:56 crc kubenswrapper[5065]: I1008 13:41:56.958389 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.143803 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0d9751-0be6-40d1-adb1-33abee551312-utilities\") pod \"4d0d9751-0be6-40d1-adb1-33abee551312\" (UID: \"4d0d9751-0be6-40d1-adb1-33abee551312\") " Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.143855 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0d9751-0be6-40d1-adb1-33abee551312-catalog-content\") pod \"4d0d9751-0be6-40d1-adb1-33abee551312\" (UID: \"4d0d9751-0be6-40d1-adb1-33abee551312\") " Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.143894 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvxtg\" (UniqueName: \"kubernetes.io/projected/4d0d9751-0be6-40d1-adb1-33abee551312-kube-api-access-xvxtg\") pod \"4d0d9751-0be6-40d1-adb1-33abee551312\" (UID: \"4d0d9751-0be6-40d1-adb1-33abee551312\") " Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.145492 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d0d9751-0be6-40d1-adb1-33abee551312-utilities" (OuterVolumeSpecName: "utilities") pod "4d0d9751-0be6-40d1-adb1-33abee551312" (UID: "4d0d9751-0be6-40d1-adb1-33abee551312"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.153299 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0d9751-0be6-40d1-adb1-33abee551312-kube-api-access-xvxtg" (OuterVolumeSpecName: "kube-api-access-xvxtg") pod "4d0d9751-0be6-40d1-adb1-33abee551312" (UID: "4d0d9751-0be6-40d1-adb1-33abee551312"). InnerVolumeSpecName "kube-api-access-xvxtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.163223 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d0d9751-0be6-40d1-adb1-33abee551312-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d0d9751-0be6-40d1-adb1-33abee551312" (UID: "4d0d9751-0be6-40d1-adb1-33abee551312"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.245715 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0d9751-0be6-40d1-adb1-33abee551312-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.245755 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvxtg\" (UniqueName: \"kubernetes.io/projected/4d0d9751-0be6-40d1-adb1-33abee551312-kube-api-access-xvxtg\") on node \"crc\" DevicePath \"\"" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.245770 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0d9751-0be6-40d1-adb1-33abee551312-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.596005 5065 generic.go:334] "Generic (PLEG): container finished" podID="4d0d9751-0be6-40d1-adb1-33abee551312" containerID="67957efdabcf7279e38085a302e6745f0b822400f8f5dcbfa8c2513827de3369" exitCode=0 Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.596072 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwzw2" event={"ID":"4d0d9751-0be6-40d1-adb1-33abee551312","Type":"ContainerDied","Data":"67957efdabcf7279e38085a302e6745f0b822400f8f5dcbfa8c2513827de3369"} Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.596090 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwzw2" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.596111 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwzw2" event={"ID":"4d0d9751-0be6-40d1-adb1-33abee551312","Type":"ContainerDied","Data":"770ff0c312411ce3dab39d2a0894663b852437c8f03428afad54cedd20f450d2"} Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.596152 5065 scope.go:117] "RemoveContainer" containerID="67957efdabcf7279e38085a302e6745f0b822400f8f5dcbfa8c2513827de3369" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.618733 5065 scope.go:117] "RemoveContainer" containerID="4c5b635cc8c6e949deabb484def16d6f8dbe1c400c9f27752c37137f46f867e0" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.639063 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwzw2"] Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.645385 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwzw2"] Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.662058 5065 scope.go:117] "RemoveContainer" containerID="103f10bb2a36e64f8f28e5ff693b59919ebaed0f52805ee774f9a095bd9440eb" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.679051 5065 scope.go:117] "RemoveContainer" containerID="67957efdabcf7279e38085a302e6745f0b822400f8f5dcbfa8c2513827de3369" Oct 08 13:41:57 crc kubenswrapper[5065]: E1008 13:41:57.679576 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67957efdabcf7279e38085a302e6745f0b822400f8f5dcbfa8c2513827de3369\": container with ID starting with 67957efdabcf7279e38085a302e6745f0b822400f8f5dcbfa8c2513827de3369 not found: ID does not exist" containerID="67957efdabcf7279e38085a302e6745f0b822400f8f5dcbfa8c2513827de3369" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.679612 5065 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67957efdabcf7279e38085a302e6745f0b822400f8f5dcbfa8c2513827de3369"} err="failed to get container status \"67957efdabcf7279e38085a302e6745f0b822400f8f5dcbfa8c2513827de3369\": rpc error: code = NotFound desc = could not find container \"67957efdabcf7279e38085a302e6745f0b822400f8f5dcbfa8c2513827de3369\": container with ID starting with 67957efdabcf7279e38085a302e6745f0b822400f8f5dcbfa8c2513827de3369 not found: ID does not exist" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.679637 5065 scope.go:117] "RemoveContainer" containerID="4c5b635cc8c6e949deabb484def16d6f8dbe1c400c9f27752c37137f46f867e0" Oct 08 13:41:57 crc kubenswrapper[5065]: E1008 13:41:57.680059 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5b635cc8c6e949deabb484def16d6f8dbe1c400c9f27752c37137f46f867e0\": container with ID starting with 4c5b635cc8c6e949deabb484def16d6f8dbe1c400c9f27752c37137f46f867e0 not found: ID does not exist" containerID="4c5b635cc8c6e949deabb484def16d6f8dbe1c400c9f27752c37137f46f867e0" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.680088 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5b635cc8c6e949deabb484def16d6f8dbe1c400c9f27752c37137f46f867e0"} err="failed to get container status \"4c5b635cc8c6e949deabb484def16d6f8dbe1c400c9f27752c37137f46f867e0\": rpc error: code = NotFound desc = could not find container \"4c5b635cc8c6e949deabb484def16d6f8dbe1c400c9f27752c37137f46f867e0\": container with ID starting with 4c5b635cc8c6e949deabb484def16d6f8dbe1c400c9f27752c37137f46f867e0 not found: ID does not exist" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.680105 5065 scope.go:117] "RemoveContainer" containerID="103f10bb2a36e64f8f28e5ff693b59919ebaed0f52805ee774f9a095bd9440eb" Oct 08 13:41:57 crc kubenswrapper[5065]: E1008 13:41:57.680488 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"103f10bb2a36e64f8f28e5ff693b59919ebaed0f52805ee774f9a095bd9440eb\": container with ID starting with 103f10bb2a36e64f8f28e5ff693b59919ebaed0f52805ee774f9a095bd9440eb not found: ID does not exist" containerID="103f10bb2a36e64f8f28e5ff693b59919ebaed0f52805ee774f9a095bd9440eb" Oct 08 13:41:57 crc kubenswrapper[5065]: I1008 13:41:57.680520 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"103f10bb2a36e64f8f28e5ff693b59919ebaed0f52805ee774f9a095bd9440eb"} err="failed to get container status \"103f10bb2a36e64f8f28e5ff693b59919ebaed0f52805ee774f9a095bd9440eb\": rpc error: code = NotFound desc = could not find container \"103f10bb2a36e64f8f28e5ff693b59919ebaed0f52805ee774f9a095bd9440eb\": container with ID starting with 103f10bb2a36e64f8f28e5ff693b59919ebaed0f52805ee774f9a095bd9440eb not found: ID does not exist" Oct 08 13:41:58 crc kubenswrapper[5065]: I1008 13:41:58.892228 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d0d9751-0be6-40d1-adb1-33abee551312" path="/var/lib/kubelet/pods/4d0d9751-0be6-40d1-adb1-33abee551312/volumes" Oct 08 13:42:05 crc kubenswrapper[5065]: I1008 13:42:05.937035 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2c6m6"] Oct 08 13:42:05 crc kubenswrapper[5065]: E1008 13:42:05.941982 5065 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4d0d9751-0be6-40d1-adb1-33abee551312" containerName="extract-content" Oct 08 13:42:05 crc kubenswrapper[5065]: I1008 13:42:05.942148 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0d9751-0be6-40d1-adb1-33abee551312" containerName="extract-content" Oct 08 13:42:05 crc kubenswrapper[5065]: E1008 13:42:05.942223 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04b482d-2693-4a20-b5b0-ad16dd75cf2c" containerName="extract-utilities" Oct 08 13:42:05 crc kubenswrapper[5065]: I1008 13:42:05.942273 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04b482d-2693-4a20-b5b0-ad16dd75cf2c" containerName="extract-utilities" Oct 08 13:42:05 crc kubenswrapper[5065]: E1008 13:42:05.942354 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04b482d-2693-4a20-b5b0-ad16dd75cf2c" containerName="registry-server" Oct 08 13:42:05 crc kubenswrapper[5065]: I1008 13:42:05.942431 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04b482d-2693-4a20-b5b0-ad16dd75cf2c" containerName="registry-server" Oct 08 13:42:05 crc kubenswrapper[5065]: E1008 13:42:05.942501 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04b482d-2693-4a20-b5b0-ad16dd75cf2c" containerName="extract-content" Oct 08 13:42:05 crc kubenswrapper[5065]: I1008 13:42:05.942640 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04b482d-2693-4a20-b5b0-ad16dd75cf2c" containerName="extract-content" Oct 08 13:42:05 crc kubenswrapper[5065]: E1008 13:42:05.942697 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0d9751-0be6-40d1-adb1-33abee551312" containerName="extract-utilities" Oct 08 13:42:05 crc kubenswrapper[5065]: I1008 13:42:05.942753 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0d9751-0be6-40d1-adb1-33abee551312" containerName="extract-utilities" Oct 08 13:42:05 crc kubenswrapper[5065]: E1008 13:42:05.942820 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0d9751-0be6-40d1-adb1-33abee551312" containerName="registry-server" Oct 08 13:42:05 crc kubenswrapper[5065]: I1008 13:42:05.942875 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0d9751-0be6-40d1-adb1-33abee551312" containerName="registry-server" Oct 08 13:42:05 crc kubenswrapper[5065]: I1008 13:42:05.943113 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04b482d-2693-4a20-b5b0-ad16dd75cf2c" containerName="registry-server" Oct 08 13:42:05 crc kubenswrapper[5065]: I1008 13:42:05.943262 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d0d9751-0be6-40d1-adb1-33abee551312" containerName="registry-server" Oct 08 13:42:05 crc kubenswrapper[5065]: I1008 13:42:05.944293 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2c6m6" Oct 08 13:42:05 crc kubenswrapper[5065]: I1008 13:42:05.950311 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2c6m6"] Oct 08 13:42:06 crc kubenswrapper[5065]: I1008 13:42:06.101668 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-catalog-content\") pod \"certified-operators-2c6m6\" (UID: \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\") " pod="openshift-marketplace/certified-operators-2c6m6" Oct 08 13:42:06 crc kubenswrapper[5065]: I1008 13:42:06.102071 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-utilities\") pod \"certified-operators-2c6m6\" (UID: \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\") " pod="openshift-marketplace/certified-operators-2c6m6" Oct 08 13:42:06 crc kubenswrapper[5065]: I1008 13:42:06.102247 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scb5w\" (UniqueName: \"kubernetes.io/projected/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-kube-api-access-scb5w\") pod \"certified-operators-2c6m6\" (UID: \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\") " pod="openshift-marketplace/certified-operators-2c6m6" Oct 08 13:42:06 crc kubenswrapper[5065]: I1008 13:42:06.203758 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-catalog-content\") pod \"certified-operators-2c6m6\" (UID: \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\") " pod="openshift-marketplace/certified-operators-2c6m6" Oct 08 13:42:06 crc kubenswrapper[5065]: I1008 13:42:06.204034 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-utilities\") pod \"certified-operators-2c6m6\" (UID: \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\") " pod="openshift-marketplace/certified-operators-2c6m6" Oct 08 13:42:06 crc kubenswrapper[5065]: I1008 13:42:06.204123 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scb5w\" (UniqueName: \"kubernetes.io/projected/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-kube-api-access-scb5w\") pod \"certified-operators-2c6m6\" (UID: \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\") " pod="openshift-marketplace/certified-operators-2c6m6" Oct 08 13:42:06 crc kubenswrapper[5065]: I1008 13:42:06.204196 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-catalog-content\") pod \"certified-operators-2c6m6\" (UID: \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\") " pod="openshift-marketplace/certified-operators-2c6m6" Oct 08 13:42:06 crc kubenswrapper[5065]: I1008 13:42:06.204328 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-utilities\") pod \"certified-operators-2c6m6\" (UID: \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\") " pod="openshift-marketplace/certified-operators-2c6m6" Oct 08 13:42:06 crc kubenswrapper[5065]: I1008 13:42:06.226199 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-scb5w\" (UniqueName: \"kubernetes.io/projected/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-kube-api-access-scb5w\") pod \"certified-operators-2c6m6\" (UID: \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\") " pod="openshift-marketplace/certified-operators-2c6m6" Oct 08 13:42:06 crc kubenswrapper[5065]: I1008 13:42:06.315392 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2c6m6" Oct 08 13:42:06 crc kubenswrapper[5065]: I1008 13:42:06.761208 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2c6m6"] Oct 08 13:42:07 crc kubenswrapper[5065]: I1008 13:42:07.703289 5065 generic.go:334] "Generic (PLEG): container finished" podID="06c283e8-d0a0-4a5a-8641-959b21cb3a3e" containerID="725b8442dfa7dbb1fc1af34755892032bfb5abec1bafbc1ecc5667865f011729" exitCode=0 Oct 08 13:42:07 crc kubenswrapper[5065]: I1008 13:42:07.703528 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c6m6" event={"ID":"06c283e8-d0a0-4a5a-8641-959b21cb3a3e","Type":"ContainerDied","Data":"725b8442dfa7dbb1fc1af34755892032bfb5abec1bafbc1ecc5667865f011729"} Oct 08 13:42:07 crc kubenswrapper[5065]: I1008 13:42:07.703693 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c6m6" event={"ID":"06c283e8-d0a0-4a5a-8641-959b21cb3a3e","Type":"ContainerStarted","Data":"4e43d844d02055bc08f0ad6abe194433a5d9d94fb79499526b3fc18ec53ec53d"} Oct 08 13:42:09 crc kubenswrapper[5065]: I1008 13:42:09.722939 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c6m6" event={"ID":"06c283e8-d0a0-4a5a-8641-959b21cb3a3e","Type":"ContainerStarted","Data":"753ece322ca97ef6df0d7522b152c22cfc040304f85d434c084549adec94e3fc"} Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.723128 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zxl5x"] Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.725739 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxl5x" Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.739653 5065 generic.go:334] "Generic (PLEG): container finished" podID="06c283e8-d0a0-4a5a-8641-959b21cb3a3e" containerID="753ece322ca97ef6df0d7522b152c22cfc040304f85d434c084549adec94e3fc" exitCode=0 Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.739708 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c6m6" event={"ID":"06c283e8-d0a0-4a5a-8641-959b21cb3a3e","Type":"ContainerDied","Data":"753ece322ca97ef6df0d7522b152c22cfc040304f85d434c084549adec94e3fc"} Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.743961 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxl5x"] Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.871542 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f56fe4-7da9-4462-9eba-aafaeb95aaea-catalog-content\") pod \"community-operators-zxl5x\" (UID: \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\") " pod="openshift-marketplace/community-operators-zxl5x" Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.871855 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f56fe4-7da9-4462-9eba-aafaeb95aaea-utilities\") pod \"community-operators-zxl5x\" (UID: \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\") " pod="openshift-marketplace/community-operators-zxl5x" Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.872028 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9gc5\" (UniqueName: \"kubernetes.io/projected/87f56fe4-7da9-4462-9eba-aafaeb95aaea-kube-api-access-c9gc5\") pod \"community-operators-zxl5x\" (UID: \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\") " pod="openshift-marketplace/community-operators-zxl5x" Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.973268 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f56fe4-7da9-4462-9eba-aafaeb95aaea-catalog-content\") pod \"community-operators-zxl5x\" (UID: \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\") " pod="openshift-marketplace/community-operators-zxl5x" Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.973330 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f56fe4-7da9-4462-9eba-aafaeb95aaea-utilities\") pod \"community-operators-zxl5x\" (UID: \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\") " pod="openshift-marketplace/community-operators-zxl5x" Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.973391 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9gc5\" (UniqueName: \"kubernetes.io/projected/87f56fe4-7da9-4462-9eba-aafaeb95aaea-kube-api-access-c9gc5\") pod \"community-operators-zxl5x\" (UID: \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\") " pod="openshift-marketplace/community-operators-zxl5x" Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.973814 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f56fe4-7da9-4462-9eba-aafaeb95aaea-catalog-content\") pod \"community-operators-zxl5x\" 
(UID: \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\") " pod="openshift-marketplace/community-operators-zxl5x" Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.974115 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f56fe4-7da9-4462-9eba-aafaeb95aaea-utilities\") pod \"community-operators-zxl5x\" (UID: \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\") " pod="openshift-marketplace/community-operators-zxl5x" Oct 08 13:42:10 crc kubenswrapper[5065]: I1008 13:42:10.993943 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9gc5\" (UniqueName: \"kubernetes.io/projected/87f56fe4-7da9-4462-9eba-aafaeb95aaea-kube-api-access-c9gc5\") pod \"community-operators-zxl5x\" (UID: \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\") " pod="openshift-marketplace/community-operators-zxl5x" Oct 08 13:42:11 crc kubenswrapper[5065]: I1008 13:42:11.060864 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxl5x" Oct 08 13:42:11 crc kubenswrapper[5065]: I1008 13:42:11.530157 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxl5x"] Oct 08 13:42:11 crc kubenswrapper[5065]: W1008 13:42:11.534707 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87f56fe4_7da9_4462_9eba_aafaeb95aaea.slice/crio-52557c637fe8085efc9aa7f1cee42b479d335bdd43d6848fda8e75c98f8cb9cd WatchSource:0}: Error finding container 52557c637fe8085efc9aa7f1cee42b479d335bdd43d6848fda8e75c98f8cb9cd: Status 404 returned error can't find the container with id 52557c637fe8085efc9aa7f1cee42b479d335bdd43d6848fda8e75c98f8cb9cd Oct 08 13:42:11 crc kubenswrapper[5065]: I1008 13:42:11.762070 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c6m6" event={"ID":"06c283e8-d0a0-4a5a-8641-959b21cb3a3e","Type":"ContainerStarted","Data":"f025db36e37db87b61c0ac4cb1452b1ba385188dc26664b5151141ac17686b6d"} Oct 08 13:42:11 crc kubenswrapper[5065]: I1008 13:42:11.764555 5065 generic.go:334] "Generic (PLEG): container finished" podID="87f56fe4-7da9-4462-9eba-aafaeb95aaea" containerID="73c886d23b7fce33721236acacaf9e4620fe7fc93f491a3b244e34ad8780fb53" exitCode=0 Oct 08 13:42:11 crc kubenswrapper[5065]: I1008 13:42:11.764587 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxl5x" event={"ID":"87f56fe4-7da9-4462-9eba-aafaeb95aaea","Type":"ContainerDied","Data":"73c886d23b7fce33721236acacaf9e4620fe7fc93f491a3b244e34ad8780fb53"} Oct 08 13:42:11 crc kubenswrapper[5065]: I1008 13:42:11.764605 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxl5x" event={"ID":"87f56fe4-7da9-4462-9eba-aafaeb95aaea","Type":"ContainerStarted","Data":"52557c637fe8085efc9aa7f1cee42b479d335bdd43d6848fda8e75c98f8cb9cd"} Oct 08 13:42:11 crc kubenswrapper[5065]: I1008 13:42:11.783371 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2c6m6" podStartSLOduration=3.200826386 podStartE2EDuration="6.783351793s" podCreationTimestamp="2025-10-08 13:42:05 +0000 UTC" firstStartedPulling="2025-10-08 13:42:07.707846356 +0000 UTC m=+1429.485228113" lastFinishedPulling="2025-10-08 13:42:11.290371763 +0000 UTC m=+1433.067753520" observedRunningTime="2025-10-08 13:42:11.782557801 +0000 UTC 
Oct 08 13:42:12 crc kubenswrapper[5065]: I1008 13:42:12.774989 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxl5x" event={"ID":"87f56fe4-7da9-4462-9eba-aafaeb95aaea","Type":"ContainerStarted","Data":"f44e6eec568a60547f322bf6574c39d18c49a173d6947fb540716069df34b5e4"}
Oct 08 13:42:13 crc kubenswrapper[5065]: I1008 13:42:13.787169 5065 generic.go:334] "Generic (PLEG): container finished" podID="87f56fe4-7da9-4462-9eba-aafaeb95aaea" containerID="f44e6eec568a60547f322bf6574c39d18c49a173d6947fb540716069df34b5e4" exitCode=0
Oct 08 13:42:13 crc kubenswrapper[5065]: I1008 13:42:13.787257 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxl5x" event={"ID":"87f56fe4-7da9-4462-9eba-aafaeb95aaea","Type":"ContainerDied","Data":"f44e6eec568a60547f322bf6574c39d18c49a173d6947fb540716069df34b5e4"}
Oct 08 13:42:14 crc kubenswrapper[5065]: I1008 13:42:14.800651 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxl5x" event={"ID":"87f56fe4-7da9-4462-9eba-aafaeb95aaea","Type":"ContainerStarted","Data":"a42bdab02d5774fd25d425c24620baa5f73a1ced8bbb7e3881268c384c26f82b"}
Oct 08 13:42:14 crc kubenswrapper[5065]: I1008 13:42:14.825696 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zxl5x" podStartSLOduration=2.384560031 podStartE2EDuration="4.825676814s" podCreationTimestamp="2025-10-08 13:42:10 +0000 UTC" firstStartedPulling="2025-10-08 13:42:11.765584247 +0000 UTC m=+1433.542966004" lastFinishedPulling="2025-10-08 13:42:14.20670103 +0000 UTC m=+1435.984082787" observedRunningTime="2025-10-08 13:42:14.820633023 +0000 UTC m=+1436.598014790" watchObservedRunningTime="2025-10-08 13:42:14.825676814 +0000 UTC m=+1436.603058571"
Oct 08 13:42:16 crc kubenswrapper[5065]: I1008 13:42:16.315935 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2c6m6"
Oct 08 13:42:16 crc kubenswrapper[5065]: I1008 13:42:16.315994 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2c6m6"
Oct 08 13:42:16 crc kubenswrapper[5065]: I1008 13:42:16.370986 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2c6m6"
Oct 08 13:42:16 crc kubenswrapper[5065]: I1008 13:42:16.882391 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2c6m6"
Oct 08 13:42:18 crc kubenswrapper[5065]: I1008 13:42:18.513580 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2c6m6"]
Oct 08 13:42:18 crc kubenswrapper[5065]: I1008 13:42:18.848004 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2c6m6" podUID="06c283e8-d0a0-4a5a-8641-959b21cb3a3e" containerName="registry-server" containerID="cri-o://f025db36e37db87b61c0ac4cb1452b1ba385188dc26664b5151141ac17686b6d" gracePeriod=2
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.285843 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2c6m6"
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.397150 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-utilities\") pod \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\" (UID: \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\") "
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.397280 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-catalog-content\") pod \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\" (UID: \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\") "
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.397479 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scb5w\" (UniqueName: \"kubernetes.io/projected/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-kube-api-access-scb5w\") pod \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\" (UID: \"06c283e8-d0a0-4a5a-8641-959b21cb3a3e\") "
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.398130 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-utilities" (OuterVolumeSpecName: "utilities") pod "06c283e8-d0a0-4a5a-8641-959b21cb3a3e" (UID: "06c283e8-d0a0-4a5a-8641-959b21cb3a3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.409658 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-kube-api-access-scb5w" (OuterVolumeSpecName: "kube-api-access-scb5w") pod "06c283e8-d0a0-4a5a-8641-959b21cb3a3e" (UID: "06c283e8-d0a0-4a5a-8641-959b21cb3a3e"). InnerVolumeSpecName "kube-api-access-scb5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.498812 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.498840 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scb5w\" (UniqueName: \"kubernetes.io/projected/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-kube-api-access-scb5w\") on node \"crc\" DevicePath \"\""
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.625170 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06c283e8-d0a0-4a5a-8641-959b21cb3a3e" (UID: "06c283e8-d0a0-4a5a-8641-959b21cb3a3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.701503 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c283e8-d0a0-4a5a-8641-959b21cb3a3e-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.861679 5065 generic.go:334] "Generic (PLEG): container finished" podID="06c283e8-d0a0-4a5a-8641-959b21cb3a3e" containerID="f025db36e37db87b61c0ac4cb1452b1ba385188dc26664b5151141ac17686b6d" exitCode=0
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.861760 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c6m6" event={"ID":"06c283e8-d0a0-4a5a-8641-959b21cb3a3e","Type":"ContainerDied","Data":"f025db36e37db87b61c0ac4cb1452b1ba385188dc26664b5151141ac17686b6d"}
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.861811 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c6m6" event={"ID":"06c283e8-d0a0-4a5a-8641-959b21cb3a3e","Type":"ContainerDied","Data":"4e43d844d02055bc08f0ad6abe194433a5d9d94fb79499526b3fc18ec53ec53d"}
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.861814 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2c6m6"
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.861842 5065 scope.go:117] "RemoveContainer" containerID="f025db36e37db87b61c0ac4cb1452b1ba385188dc26664b5151141ac17686b6d"
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.896987 5065 scope.go:117] "RemoveContainer" containerID="753ece322ca97ef6df0d7522b152c22cfc040304f85d434c084549adec94e3fc"
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.910987 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2c6m6"]
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.918535 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2c6m6"]
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.957713 5065 scope.go:117] "RemoveContainer" containerID="725b8442dfa7dbb1fc1af34755892032bfb5abec1bafbc1ecc5667865f011729"
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.978959 5065 scope.go:117] "RemoveContainer" containerID="f025db36e37db87b61c0ac4cb1452b1ba385188dc26664b5151141ac17686b6d"
Oct 08 13:42:19 crc kubenswrapper[5065]: E1008 13:42:19.979606 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f025db36e37db87b61c0ac4cb1452b1ba385188dc26664b5151141ac17686b6d\": container with ID starting with f025db36e37db87b61c0ac4cb1452b1ba385188dc26664b5151141ac17686b6d not found: ID does not exist" containerID="f025db36e37db87b61c0ac4cb1452b1ba385188dc26664b5151141ac17686b6d"
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.979673 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f025db36e37db87b61c0ac4cb1452b1ba385188dc26664b5151141ac17686b6d"} err="failed to get container status \"f025db36e37db87b61c0ac4cb1452b1ba385188dc26664b5151141ac17686b6d\": rpc error: code = NotFound desc = could not find container \"f025db36e37db87b61c0ac4cb1452b1ba385188dc26664b5151141ac17686b6d\": container with ID starting with f025db36e37db87b61c0ac4cb1452b1ba385188dc26664b5151141ac17686b6d not found: ID does not exist"
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.979725 5065 scope.go:117] "RemoveContainer" containerID="753ece322ca97ef6df0d7522b152c22cfc040304f85d434c084549adec94e3fc"
Oct 08 13:42:19 crc kubenswrapper[5065]: E1008 13:42:19.980069 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"753ece322ca97ef6df0d7522b152c22cfc040304f85d434c084549adec94e3fc\": container with ID starting with 753ece322ca97ef6df0d7522b152c22cfc040304f85d434c084549adec94e3fc not found: ID does not exist" containerID="753ece322ca97ef6df0d7522b152c22cfc040304f85d434c084549adec94e3fc"
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.980102 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753ece322ca97ef6df0d7522b152c22cfc040304f85d434c084549adec94e3fc"} err="failed to get container status \"753ece322ca97ef6df0d7522b152c22cfc040304f85d434c084549adec94e3fc\": rpc error: code = NotFound desc = could not find container \"753ece322ca97ef6df0d7522b152c22cfc040304f85d434c084549adec94e3fc\": container with ID starting with 753ece322ca97ef6df0d7522b152c22cfc040304f85d434c084549adec94e3fc not found: ID does not exist"
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.980121 5065 scope.go:117] "RemoveContainer" containerID="725b8442dfa7dbb1fc1af34755892032bfb5abec1bafbc1ecc5667865f011729"
Oct 08 13:42:19 crc kubenswrapper[5065]: E1008 13:42:19.980439 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"725b8442dfa7dbb1fc1af34755892032bfb5abec1bafbc1ecc5667865f011729\": container with ID starting with 725b8442dfa7dbb1fc1af34755892032bfb5abec1bafbc1ecc5667865f011729 not found: ID does not exist" containerID="725b8442dfa7dbb1fc1af34755892032bfb5abec1bafbc1ecc5667865f011729"
Oct 08 13:42:19 crc kubenswrapper[5065]: I1008 13:42:19.980472 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"725b8442dfa7dbb1fc1af34755892032bfb5abec1bafbc1ecc5667865f011729"} err="failed to get container status \"725b8442dfa7dbb1fc1af34755892032bfb5abec1bafbc1ecc5667865f011729\": rpc error: code = NotFound desc = could not find container \"725b8442dfa7dbb1fc1af34755892032bfb5abec1bafbc1ecc5667865f011729\": container with ID starting with 725b8442dfa7dbb1fc1af34755892032bfb5abec1bafbc1ecc5667865f011729 not found: ID does not exist"
Oct 08 13:42:20 crc kubenswrapper[5065]: I1008 13:42:20.882256 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06c283e8-d0a0-4a5a-8641-959b21cb3a3e" path="/var/lib/kubelet/pods/06c283e8-d0a0-4a5a-8641-959b21cb3a3e/volumes"
Oct 08 13:42:21 crc kubenswrapper[5065]: I1008 13:42:21.061345 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zxl5x"
Oct 08 13:42:21 crc kubenswrapper[5065]: I1008 13:42:21.061382 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zxl5x"
Oct 08 13:42:21 crc kubenswrapper[5065]: I1008 13:42:21.109971 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zxl5x"
Oct 08 13:42:21 crc kubenswrapper[5065]: I1008 13:42:21.965640 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zxl5x"
Oct 08 13:42:22 crc kubenswrapper[5065]: I1008 13:42:22.709090 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxl5x"]
Oct 08 13:42:23 crc kubenswrapper[5065]: I1008 13:42:23.902639 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zxl5x" podUID="87f56fe4-7da9-4462-9eba-aafaeb95aaea" containerName="registry-server" containerID="cri-o://a42bdab02d5774fd25d425c24620baa5f73a1ced8bbb7e3881268c384c26f82b" gracePeriod=2
Oct 08 13:42:24 crc kubenswrapper[5065]: I1008 13:42:24.375514 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 13:42:24 crc kubenswrapper[5065]: I1008 13:42:24.375595 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 13:42:24 crc kubenswrapper[5065]: I1008 13:42:24.919361 5065 generic.go:334] "Generic (PLEG): container finished" podID="87f56fe4-7da9-4462-9eba-aafaeb95aaea" containerID="a42bdab02d5774fd25d425c24620baa5f73a1ced8bbb7e3881268c384c26f82b" exitCode=0
Oct 08 13:42:24 crc kubenswrapper[5065]: I1008 13:42:24.919440 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxl5x" event={"ID":"87f56fe4-7da9-4462-9eba-aafaeb95aaea","Type":"ContainerDied","Data":"a42bdab02d5774fd25d425c24620baa5f73a1ced8bbb7e3881268c384c26f82b"}
Oct 08 13:42:24 crc kubenswrapper[5065]: I1008 13:42:24.919476 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxl5x" event={"ID":"87f56fe4-7da9-4462-9eba-aafaeb95aaea","Type":"ContainerDied","Data":"52557c637fe8085efc9aa7f1cee42b479d335bdd43d6848fda8e75c98f8cb9cd"}
Oct 08 13:42:24 crc kubenswrapper[5065]: I1008 13:42:24.919490 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52557c637fe8085efc9aa7f1cee42b479d335bdd43d6848fda8e75c98f8cb9cd"
Oct 08 13:42:24 crc kubenswrapper[5065]: I1008 13:42:24.927216 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxl5x"
Oct 08 13:42:24 crc kubenswrapper[5065]: I1008 13:42:24.990882 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9gc5\" (UniqueName: \"kubernetes.io/projected/87f56fe4-7da9-4462-9eba-aafaeb95aaea-kube-api-access-c9gc5\") pod \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\" (UID: \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\") "
Oct 08 13:42:24 crc kubenswrapper[5065]: I1008 13:42:24.990950 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f56fe4-7da9-4462-9eba-aafaeb95aaea-utilities\") pod \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\" (UID: \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\") "
Oct 08 13:42:24 crc kubenswrapper[5065]: I1008 13:42:24.991006 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f56fe4-7da9-4462-9eba-aafaeb95aaea-catalog-content\") pod \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\" (UID: \"87f56fe4-7da9-4462-9eba-aafaeb95aaea\") "
Oct 08 13:42:24 crc kubenswrapper[5065]: I1008 13:42:24.992123 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f56fe4-7da9-4462-9eba-aafaeb95aaea-utilities" (OuterVolumeSpecName: "utilities") pod "87f56fe4-7da9-4462-9eba-aafaeb95aaea" (UID: "87f56fe4-7da9-4462-9eba-aafaeb95aaea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:42:24 crc kubenswrapper[5065]: I1008 13:42:24.996442 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f56fe4-7da9-4462-9eba-aafaeb95aaea-kube-api-access-c9gc5" (OuterVolumeSpecName: "kube-api-access-c9gc5") pod "87f56fe4-7da9-4462-9eba-aafaeb95aaea" (UID: "87f56fe4-7da9-4462-9eba-aafaeb95aaea"). InnerVolumeSpecName "kube-api-access-c9gc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:42:25 crc kubenswrapper[5065]: I1008 13:42:25.039709 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f56fe4-7da9-4462-9eba-aafaeb95aaea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87f56fe4-7da9-4462-9eba-aafaeb95aaea" (UID: "87f56fe4-7da9-4462-9eba-aafaeb95aaea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:42:25 crc kubenswrapper[5065]: I1008 13:42:25.093269 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f56fe4-7da9-4462-9eba-aafaeb95aaea-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 13:42:25 crc kubenswrapper[5065]: I1008 13:42:25.093608 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9gc5\" (UniqueName: \"kubernetes.io/projected/87f56fe4-7da9-4462-9eba-aafaeb95aaea-kube-api-access-c9gc5\") on node \"crc\" DevicePath \"\""
Oct 08 13:42:25 crc kubenswrapper[5065]: I1008 13:42:25.093783 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f56fe4-7da9-4462-9eba-aafaeb95aaea-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 13:42:25 crc kubenswrapper[5065]: I1008 13:42:25.932346 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxl5x"
Oct 08 13:42:25 crc kubenswrapper[5065]: I1008 13:42:25.975149 5065 scope.go:117] "RemoveContainer" containerID="5ba00a4ab123b3b923191d3fab315b3342b75bc684a655689917fabd7ba6ebed"
Oct 08 13:42:25 crc kubenswrapper[5065]: I1008 13:42:25.980713 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxl5x"]
Oct 08 13:42:25 crc kubenswrapper[5065]: I1008 13:42:25.991238 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zxl5x"]
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.001969 5065 scope.go:117] "RemoveContainer" containerID="74a0260deb479ffd4631be8552dcd86ba1855d8f7296f9a609726174337599e4"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.068799 5065 scope.go:117] "RemoveContainer" containerID="dbeb728c095d1a4f766fd0ec012ec6be147cd0597e1ac2d0a832007dba303de8"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.096289 5065 scope.go:117] "RemoveContainer" containerID="d653154c9971e3cfa954bc36e38b40b4d923a999ac76c039bc335c1184665c31"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.117802 5065 scope.go:117] "RemoveContainer" containerID="efa34a603bc3b0d53f772ed59ce1e71ad2f2f38f3afc75ef7c8ed98efcb40e44"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.139538 5065 scope.go:117] "RemoveContainer" containerID="18cb61ad73086df94f6a3e8295ee1d22474fda18c12c345421b646c270cb0232"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.173381 5065 scope.go:117] "RemoveContainer" containerID="8e0abc75176f532b668e3aba769afb7b2bfe0a652480850844af97de5ed2dc38"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.195872 5065 scope.go:117] "RemoveContainer" containerID="a76703a08c83f16e155e4a2515fe2a55f05d42fe92c2bc2d586cf48ce2f3aca2"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.217789 5065 scope.go:117] "RemoveContainer" containerID="240b3c89b50ae321407c1cf6aa488343699d4a7a372717dfbfb42f1a27654d70"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.235342 5065 scope.go:117] "RemoveContainer" containerID="b282511f72530c5e6ac5ab13e197f2eae05c1a6b4a58b67bb604f364e59ff084"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.252840 5065 scope.go:117] "RemoveContainer" containerID="f67790086ee970d905212975fc43ca99ba502b657df1fa6b0d332e8df0913c96"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.291269 5065 scope.go:117] "RemoveContainer" containerID="2af10c1dd3063eb7dbf653e3d9f9df572dae8ef522034f34e2149cf77f826cbf"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.312201 5065 scope.go:117] "RemoveContainer" containerID="912bf4ebfd4305ec91c794cfaf719918b4246027bd8bb8e15acf893c8876449d"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.331138 5065 scope.go:117] "RemoveContainer" containerID="7fc9b44c4c76329a137e71e77cf8552d2747dd91200b1b2a25ecec548a4f3af5"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.358006 5065 scope.go:117] "RemoveContainer" containerID="767b5650176872a830e047466d129d41f486e6bcabd932592c416ec0038e7814"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.380080 5065 scope.go:117] "RemoveContainer" containerID="dea03e50a71ed799f2a49f0eb48790a19814692deaadf712ca9793f2249fcd3e"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.411375 5065 scope.go:117] "RemoveContainer" containerID="5b2cf90c955e396a351727202956af40d4d62861367cf9be625f0e7e9590bf18"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.444024 5065 scope.go:117] "RemoveContainer" containerID="9c1b2cb3bba97162839d7321cc87203aba920e07e4f99928885d1e4d4f6be3cf"
Oct 08 13:42:26 crc kubenswrapper[5065]: I1008 13:42:26.885552 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f56fe4-7da9-4462-9eba-aafaeb95aaea" path="/var/lib/kubelet/pods/87f56fe4-7da9-4462-9eba-aafaeb95aaea/volumes"
Oct 08 13:42:54 crc kubenswrapper[5065]: I1008 13:42:54.374940 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 13:42:54 crc kubenswrapper[5065]: I1008 13:42:54.375400 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 13:42:54 crc kubenswrapper[5065]: I1008 13:42:54.375465 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj"
Oct 08 13:42:54 crc kubenswrapper[5065]: I1008 13:42:54.376051 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 13:42:54 crc kubenswrapper[5065]: I1008 13:42:54.376101 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c" gracePeriod=600
Oct 08 13:42:54 crc kubenswrapper[5065]: E1008 13:42:54.508326 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:42:55 crc kubenswrapper[5065]: I1008 13:42:55.209031 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c" exitCode=0
Oct 08 13:42:55 crc kubenswrapper[5065]: I1008 13:42:55.209082 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"}
Oct 08 13:42:55 crc kubenswrapper[5065]: I1008 13:42:55.209120 5065 scope.go:117] "RemoveContainer" containerID="5fd12d0a8c18886d62fe0f77c00a82717c3aaf19bdc8e84b083c3e64ad847f5b"
Oct 08 13:42:55 crc kubenswrapper[5065]: I1008 13:42:55.209661 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:42:55 crc kubenswrapper[5065]: E1008 13:42:55.210860 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:43:05 crc kubenswrapper[5065]: I1008 13:43:05.874162 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:43:05 crc kubenswrapper[5065]: E1008 13:43:05.874880 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:43:18 crc kubenswrapper[5065]: I1008 13:43:18.886491 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:43:18 crc kubenswrapper[5065]: E1008 13:43:18.887246 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:43:26 crc kubenswrapper[5065]: I1008 13:43:26.760472 5065 scope.go:117] "RemoveContainer" containerID="e1798572ea57cfc0ec49a70e753277538e330c3d721fda3351da9d563af5e085"
Oct 08 13:43:26 crc kubenswrapper[5065]: I1008 13:43:26.785058 5065 scope.go:117] "RemoveContainer" containerID="16bba1324f0442b219591e075d3110a041e45127e394932b36be8c1e6f2c21f2"
Oct 08 13:43:26 crc kubenswrapper[5065]: I1008 13:43:26.832692 5065 scope.go:117] "RemoveContainer" containerID="59a606638a8f15bd4f91168bdf8deccda702a925eb4c86f515c45335f2c32e92"
Oct 08 13:43:26 crc kubenswrapper[5065]: I1008 13:43:26.846775 5065 scope.go:117] "RemoveContainer" containerID="50463d675c646987c36ceb2f2ed3b6f11964f4129f88e7663cf30f72d3c799c5"
Oct 08 13:43:26 crc kubenswrapper[5065]: I1008 13:43:26.882000 5065 scope.go:117] "RemoveContainer" containerID="8f9a531617fee5e3dd656a56664dbd0591c6269efbd6664e714313e687b6be8f"
Oct 08 13:43:26 crc kubenswrapper[5065]: I1008 13:43:26.924971 5065 scope.go:117] "RemoveContainer" containerID="282626a68ae112e2a7cb36758619cc926c9f8f75cf3cc744534bbd577bef1de1"
Oct 08 13:43:26 crc kubenswrapper[5065]: I1008 13:43:26.943252 5065 scope.go:117] "RemoveContainer" containerID="a0226c22195545becdc2de24894b17738f864dbedcd9a6e394274a3f8d7be4b3"
Oct 08 13:43:26 crc kubenswrapper[5065]: I1008 13:43:26.971577 5065 scope.go:117] "RemoveContainer" containerID="33ceec7bd32526aadb9fdccd627ea9d6843d1eeca2d61fca38b266e771abc801"
Oct 08 13:43:26 crc kubenswrapper[5065]: I1008 13:43:26.991232 5065 scope.go:117] "RemoveContainer" containerID="53a13a8cc7470ca3fcf00e1a1762fd3a353cd97e981efeec2278512dabac8016"
Oct 08 13:43:27 crc kubenswrapper[5065]: I1008 13:43:27.009298 5065 scope.go:117] "RemoveContainer" containerID="0e1463d6dafc9375c3fe50606be6afdbff0ecc4ff69847c5fb6972ed597e6323"
Oct 08 13:43:27 crc kubenswrapper[5065]: I1008 13:43:27.025915 5065 scope.go:117] "RemoveContainer" containerID="1899c55c276614b9dc345b66b3715c43a46119b2159f018266e34050dda8d031"
Oct 08 13:43:27 crc kubenswrapper[5065]: I1008 13:43:27.048724 5065 scope.go:117] "RemoveContainer" containerID="aacb565316241f85f3d1791edc7c96769deec36cdab2eb82624ae8b5b5f4a50f"
Oct 08 13:43:33 crc kubenswrapper[5065]: I1008 13:43:33.873659 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:43:33 crc kubenswrapper[5065]: E1008 13:43:33.875410 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:43:48 crc kubenswrapper[5065]: I1008 13:43:48.877144 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:43:48 crc kubenswrapper[5065]: E1008 13:43:48.877890 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:44:01 crc kubenswrapper[5065]: I1008 13:44:01.874548 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:44:01 crc kubenswrapper[5065]: E1008 13:44:01.875435 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:44:17 crc kubenswrapper[5065]: I1008 13:44:17.874299 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:44:17 crc kubenswrapper[5065]: E1008 13:44:17.875901 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:44:27 crc kubenswrapper[5065]: I1008 13:44:27.203916 5065 scope.go:117] "RemoveContainer" containerID="7874ec49d85eba9cfb91c420e3419d8a588bba8034f460f6177db2652682d633"
Oct 08 13:44:27 crc kubenswrapper[5065]: I1008 13:44:27.237449 5065 scope.go:117] "RemoveContainer" containerID="b8ae852d80aae6acad85780abb16c8854f1fddd5178e43292b6c487839c29fc5"
Oct 08 13:44:27 crc kubenswrapper[5065]: I1008 13:44:27.262783 5065 scope.go:117] "RemoveContainer" containerID="6e6bc70c0d9042c36b539f3256634e4e3089df774e755ee10132f50e60ab06f9"
Oct 08 13:44:27 crc kubenswrapper[5065]: I1008 13:44:27.281830 5065 scope.go:117] "RemoveContainer" containerID="ca9d0476322fc49313322ef95c7d2220e14767e0f7cd2dbdb105164851cc3638"
Oct 08 13:44:27 crc kubenswrapper[5065]: I1008 13:44:27.309637 5065 scope.go:117] "RemoveContainer" containerID="83fabf1eea9a4b46559df8e9fbd02592791da0d4384f0b4cfafb802cf3572c8d"
Oct 08 13:44:27 crc kubenswrapper[5065]: I1008 13:44:27.358045 5065 scope.go:117] "RemoveContainer" containerID="6bbb9432076f2bffcec8c43a13c477be8b205975b484bb5a7624470f0199c513"
Oct 08 13:44:27 crc kubenswrapper[5065]: I1008 13:44:27.378811 5065 scope.go:117] "RemoveContainer" containerID="bc51ebb968114d471bff7d14523f9bde9ceaa701634b0fc011f4d1ffe0c615db"
Oct 08 13:44:27 crc kubenswrapper[5065]: I1008 13:44:27.400176 5065 scope.go:117] "RemoveContainer" containerID="7075a38356042d5e366d951a3d1e78c1621a5061aada6f743076696bfacd2c63"
Oct 08 13:44:27 crc kubenswrapper[5065]: I1008 13:44:27.424836 5065 scope.go:117] "RemoveContainer" containerID="3c914d8505b39861ccdfff5dcc013fed1cc1c91fa6e400a924167a096e07326c"
Oct 08 13:44:27 crc kubenswrapper[5065]: I1008 13:44:27.445656 5065 scope.go:117] "RemoveContainer" containerID="705aeed5a07d4673f5ecaaa303c27972ce239afc15c12136ef9136b3fdea069f"
Oct 08 13:44:27 crc kubenswrapper[5065]: I1008 13:44:27.466895 5065 scope.go:117] "RemoveContainer" containerID="4bbb945b81810d9acdd912221c5f3e90864bd3b29747eef97e3c2f2775c00372"
Oct 08 13:44:27 crc kubenswrapper[5065]: I1008 13:44:27.487588 5065 scope.go:117] "RemoveContainer" containerID="25080b897bc4cb453bc113e3181f0a3b3ad3ee88b0edd40cc6b6bbfbfa6f2825"
Oct 08 13:44:27 crc kubenswrapper[5065]: I1008 13:44:27.502695 5065 scope.go:117] "RemoveContainer" containerID="1bf0529c90d0ac2dd5a5b0e88e4e632725a93d83c5bbb3394596dcada46534fe"
Oct 08 13:44:32 crc kubenswrapper[5065]: I1008 13:44:32.873750 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:44:32 crc kubenswrapper[5065]: E1008 13:44:32.874565 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:44:43 crc kubenswrapper[5065]: I1008 13:44:43.873335 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:44:43 crc kubenswrapper[5065]: E1008 13:44:43.874171 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:44:58 crc kubenswrapper[5065]: I1008 13:44:58.882610 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:44:58 crc kubenswrapper[5065]: E1008 13:44:58.883351 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.150188 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl"]
Oct 08 13:45:00 crc kubenswrapper[5065]: E1008 13:45:00.150832 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c283e8-d0a0-4a5a-8641-959b21cb3a3e" containerName="registry-server"
Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.150849 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c283e8-d0a0-4a5a-8641-959b21cb3a3e" containerName="registry-server"
Oct 08 13:45:00 crc kubenswrapper[5065]: E1008 13:45:00.150863 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c283e8-d0a0-4a5a-8641-959b21cb3a3e" containerName="extract-utilities"
Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.150872 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c283e8-d0a0-4a5a-8641-959b21cb3a3e" containerName="extract-utilities"
Oct 08 13:45:00 crc kubenswrapper[5065]: E1008 13:45:00.150881 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f56fe4-7da9-4462-9eba-aafaeb95aaea" containerName="extract-content"
Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.150888 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f56fe4-7da9-4462-9eba-aafaeb95aaea" containerName="extract-content"
Oct 08 13:45:00 crc kubenswrapper[5065]: E1008 13:45:00.150902 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f56fe4-7da9-4462-9eba-aafaeb95aaea" containerName="registry-server"
Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.150909 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f56fe4-7da9-4462-9eba-aafaeb95aaea" containerName="registry-server"
Oct 08 13:45:00 crc kubenswrapper[5065]: E1008 13:45:00.150930 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c283e8-d0a0-4a5a-8641-959b21cb3a3e" containerName="extract-content"
Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.150938 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c283e8-d0a0-4a5a-8641-959b21cb3a3e" containerName="extract-content"
Oct 08 13:45:00 crc kubenswrapper[5065]: E1008 13:45:00.150959 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f56fe4-7da9-4462-9eba-aafaeb95aaea" containerName="extract-utilities"
Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.150967 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f56fe4-7da9-4462-9eba-aafaeb95aaea" containerName="extract-utilities"
Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.151120 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c283e8-d0a0-4a5a-8641-959b21cb3a3e" containerName="registry-server"
Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.151139 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f56fe4-7da9-4462-9eba-aafaeb95aaea" containerName="registry-server"
containerName="registry-server" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.151687 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.153485 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.166837 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.182621 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl"] Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.221901 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj7fc\" (UniqueName: \"kubernetes.io/projected/0f327948-557a-4479-b298-36c36ab07724-kube-api-access-xj7fc\") pod \"collect-profiles-29332185-lx4fl\" (UID: \"0f327948-557a-4479-b298-36c36ab07724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.221977 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f327948-557a-4479-b298-36c36ab07724-config-volume\") pod \"collect-profiles-29332185-lx4fl\" (UID: \"0f327948-557a-4479-b298-36c36ab07724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.222039 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f327948-557a-4479-b298-36c36ab07724-secret-volume\") pod \"collect-profiles-29332185-lx4fl\" (UID: \"0f327948-557a-4479-b298-36c36ab07724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.323253 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj7fc\" (UniqueName: \"kubernetes.io/projected/0f327948-557a-4479-b298-36c36ab07724-kube-api-access-xj7fc\") pod \"collect-profiles-29332185-lx4fl\" (UID: \"0f327948-557a-4479-b298-36c36ab07724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.323323 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f327948-557a-4479-b298-36c36ab07724-config-volume\") pod \"collect-profiles-29332185-lx4fl\" (UID: \"0f327948-557a-4479-b298-36c36ab07724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.323526 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f327948-557a-4479-b298-36c36ab07724-secret-volume\") pod \"collect-profiles-29332185-lx4fl\" (UID: \"0f327948-557a-4479-b298-36c36ab07724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.324322 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f327948-557a-4479-b298-36c36ab07724-config-volume\") pod \"collect-profiles-29332185-lx4fl\" (UID: \"0f327948-557a-4479-b298-36c36ab07724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.331782 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f327948-557a-4479-b298-36c36ab07724-secret-volume\") pod \"collect-profiles-29332185-lx4fl\" (UID: \"0f327948-557a-4479-b298-36c36ab07724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.344396 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj7fc\" (UniqueName: \"kubernetes.io/projected/0f327948-557a-4479-b298-36c36ab07724-kube-api-access-xj7fc\") pod \"collect-profiles-29332185-lx4fl\" (UID: \"0f327948-557a-4479-b298-36c36ab07724\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.501785 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" Oct 08 13:45:00 crc kubenswrapper[5065]: I1008 13:45:00.924632 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl"] Oct 08 13:45:01 crc kubenswrapper[5065]: I1008 13:45:01.283155 5065 generic.go:334] "Generic (PLEG): container finished" podID="0f327948-557a-4479-b298-36c36ab07724" containerID="dcaf9326193530dec195f47164bfd2ea22d630f6421aa83e63983cb9ceea9630" exitCode=0 Oct 08 13:45:01 crc kubenswrapper[5065]: I1008 13:45:01.283204 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" event={"ID":"0f327948-557a-4479-b298-36c36ab07724","Type":"ContainerDied","Data":"dcaf9326193530dec195f47164bfd2ea22d630f6421aa83e63983cb9ceea9630"} Oct 08 13:45:01 crc kubenswrapper[5065]: I1008 13:45:01.283488 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" event={"ID":"0f327948-557a-4479-b298-36c36ab07724","Type":"ContainerStarted","Data":"94f0f6f6dff5e610ebedf70a660c2411d7c946b4a551bd472a3d0201767e9199"} Oct 08 13:45:02 crc kubenswrapper[5065]: I1008 13:45:02.568622 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" Oct 08 13:45:02 crc kubenswrapper[5065]: I1008 13:45:02.657799 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f327948-557a-4479-b298-36c36ab07724-config-volume\") pod \"0f327948-557a-4479-b298-36c36ab07724\" (UID: \"0f327948-557a-4479-b298-36c36ab07724\") " Oct 08 13:45:02 crc kubenswrapper[5065]: I1008 13:45:02.657851 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f327948-557a-4479-b298-36c36ab07724-secret-volume\") pod \"0f327948-557a-4479-b298-36c36ab07724\" (UID: \"0f327948-557a-4479-b298-36c36ab07724\") " Oct 08 13:45:02 crc kubenswrapper[5065]: I1008 13:45:02.657884 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj7fc\" (UniqueName: \"kubernetes.io/projected/0f327948-557a-4479-b298-36c36ab07724-kube-api-access-xj7fc\") pod \"0f327948-557a-4479-b298-36c36ab07724\" (UID: \"0f327948-557a-4479-b298-36c36ab07724\") " Oct 08 13:45:02 crc kubenswrapper[5065]: I1008 13:45:02.658849 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f327948-557a-4479-b298-36c36ab07724-config-volume" (OuterVolumeSpecName: "config-volume") pod "0f327948-557a-4479-b298-36c36ab07724" (UID: "0f327948-557a-4479-b298-36c36ab07724"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 13:45:02 crc kubenswrapper[5065]: I1008 13:45:02.664067 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f327948-557a-4479-b298-36c36ab07724-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0f327948-557a-4479-b298-36c36ab07724" (UID: "0f327948-557a-4479-b298-36c36ab07724"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 13:45:02 crc kubenswrapper[5065]: I1008 13:45:02.664597 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f327948-557a-4479-b298-36c36ab07724-kube-api-access-xj7fc" (OuterVolumeSpecName: "kube-api-access-xj7fc") pod "0f327948-557a-4479-b298-36c36ab07724" (UID: "0f327948-557a-4479-b298-36c36ab07724"). InnerVolumeSpecName "kube-api-access-xj7fc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:45:02 crc kubenswrapper[5065]: I1008 13:45:02.759535 5065 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f327948-557a-4479-b298-36c36ab07724-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 13:45:02 crc kubenswrapper[5065]: I1008 13:45:02.759869 5065 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f327948-557a-4479-b298-36c36ab07724-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 13:45:02 crc kubenswrapper[5065]: I1008 13:45:02.759892 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj7fc\" (UniqueName: \"kubernetes.io/projected/0f327948-557a-4479-b298-36c36ab07724-kube-api-access-xj7fc\") on node \"crc\" DevicePath \"\"" Oct 08 13:45:03 crc kubenswrapper[5065]: I1008 13:45:03.299869 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" event={"ID":"0f327948-557a-4479-b298-36c36ab07724","Type":"ContainerDied","Data":"94f0f6f6dff5e610ebedf70a660c2411d7c946b4a551bd472a3d0201767e9199"} Oct 08 13:45:03 crc kubenswrapper[5065]: I1008 13:45:03.299928 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f0f6f6dff5e610ebedf70a660c2411d7c946b4a551bd472a3d0201767e9199" Oct 08 13:45:03 crc kubenswrapper[5065]: I1008 13:45:03.299896 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl" Oct 08 13:45:09 crc kubenswrapper[5065]: I1008 13:45:09.873973 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c" Oct 08 13:45:09 crc kubenswrapper[5065]: E1008 13:45:09.874706 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 13:45:20 crc kubenswrapper[5065]: I1008 13:45:20.873821 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c" Oct 08 13:45:20 crc kubenswrapper[5065]: E1008 13:45:20.875225 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 13:45:27 crc kubenswrapper[5065]: I1008 13:45:27.658984 5065 scope.go:117] "RemoveContainer" containerID="5540a1f6a5acf06d57aab76547727aae20f3096f0b615cf88a2220bb824dec41" Oct 08 13:45:27 crc kubenswrapper[5065]: I1008 13:45:27.691986 5065 scope.go:117] "RemoveContainer" containerID="1140116f393d627b40f1e1c2827001066a176489665224e6cb7507310083823c" Oct 08 13:45:27 crc kubenswrapper[5065]: I1008 13:45:27.738522 5065 scope.go:117] "RemoveContainer" containerID="f73b309d85997bd26052eea883539b00795b144ef3e0730d9fbcb4739f2c945c" Oct 08 13:45:27 crc 
Oct 08 13:45:27 crc kubenswrapper[5065]: I1008 13:45:27.772610 5065 scope.go:117] "RemoveContainer" containerID="5f69ab5014e9ea4bb50f1b65d4593a168bb3d6a2c7e305250282dedb882013c5"
Oct 08 13:45:27 crc kubenswrapper[5065]: I1008 13:45:27.812574 5065 scope.go:117] "RemoveContainer" containerID="51d59f6c1a32d7f9548db66179bc1cdd29b957b043fcdf8c829de549616ab078"
Oct 08 13:45:27 crc kubenswrapper[5065]: I1008 13:45:27.832460 5065 scope.go:117] "RemoveContainer" containerID="3536fcb126639de1eca1e9e8fb96986742592f8949764d6a7616a3f3a55a523f"
Oct 08 13:45:35 crc kubenswrapper[5065]: I1008 13:45:35.874035 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:45:35 crc kubenswrapper[5065]: E1008 13:45:35.875351 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:45:47 crc kubenswrapper[5065]: I1008 13:45:47.873268 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:45:47 crc kubenswrapper[5065]: E1008 13:45:47.874061 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:46:00 crc kubenswrapper[5065]: I1008 13:46:00.876914 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:46:00 crc kubenswrapper[5065]: E1008 13:46:00.878485 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:46:14 crc kubenswrapper[5065]: I1008 13:46:14.874455 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:46:14 crc kubenswrapper[5065]: E1008 13:46:14.875036 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:46:26 crc kubenswrapper[5065]: I1008 13:46:26.874289 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:46:26 crc kubenswrapper[5065]: E1008 13:46:26.875203 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:46:27 crc kubenswrapper[5065]: I1008 13:46:27.921641 5065 scope.go:117] "RemoveContainer" containerID="2758344b7c6842d1a78126edc64da4687c969ec597d11812d34e372c73948d04"
Oct 08 13:46:27 crc kubenswrapper[5065]: I1008 13:46:27.950217 5065 scope.go:117] "RemoveContainer" containerID="e59b55badd92b9747669cea1a276f0f4749ca1574e5171035193093770c5a6ca"
Oct 08 13:46:27 crc kubenswrapper[5065]: I1008 13:46:27.989272 5065 scope.go:117] "RemoveContainer" containerID="4c694dd4a31cd2734ca3f755d48b6352d8867ca981e402024f204138ea07c425"
Oct 08 13:46:28 crc kubenswrapper[5065]: I1008 13:46:28.020410 5065 scope.go:117] "RemoveContainer" containerID="f00de157ca335b201eedfe40fcdec9a8b8d879af63720162be6c83a57024286e"
Oct 08 13:46:28 crc kubenswrapper[5065]: I1008 13:46:28.037604 5065 scope.go:117] "RemoveContainer" containerID="14b3fa78ec0416aa6cf474683bcb75b805d9d36b7d9d4631a80e461d654dfd5a"
Oct 08 13:46:40 crc kubenswrapper[5065]: I1008 13:46:40.873766 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:46:40 crc kubenswrapper[5065]: E1008 13:46:40.874243 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:46:52 crc kubenswrapper[5065]: I1008 13:46:52.873632 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:46:52 crc kubenswrapper[5065]: E1008 13:46:52.874481 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:47:03 crc kubenswrapper[5065]: I1008 13:47:03.873488 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:47:03 crc kubenswrapper[5065]: E1008 13:47:03.874337 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:47:14 crc kubenswrapper[5065]: I1008 13:47:14.878300 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:47:14 crc kubenswrapper[5065]: E1008 13:47:14.879404 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:47:29 crc kubenswrapper[5065]: I1008 13:47:29.874120 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:47:29 crc kubenswrapper[5065]: E1008 13:47:29.875000 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:47:42 crc kubenswrapper[5065]: I1008 13:47:42.873685 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:47:42 crc kubenswrapper[5065]: E1008 13:47:42.874559 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:47:56 crc kubenswrapper[5065]: I1008 13:47:56.873533 5065 scope.go:117] "RemoveContainer" containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c"
Oct 08 13:47:57 crc kubenswrapper[5065]: I1008 13:47:57.764602 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"1a3209578244bfb18edfdedf23fdca2dfd93d1cd90c3c5e4ce9b449816632d79"}
Oct 08 13:48:28 crc kubenswrapper[5065]: I1008 13:48:28.116174 5065 scope.go:117] "RemoveContainer" containerID="f44e6eec568a60547f322bf6574c39d18c49a173d6947fb540716069df34b5e4"
Oct 08 13:48:28 crc kubenswrapper[5065]: I1008 13:48:28.147357 5065 scope.go:117] "RemoveContainer" containerID="73c886d23b7fce33721236acacaf9e4620fe7fc93f491a3b244e34ad8780fb53"
Oct 08 13:48:28 crc kubenswrapper[5065]: I1008 13:48:28.194940 5065 scope.go:117] "RemoveContainer" containerID="a42bdab02d5774fd25d425c24620baa5f73a1ced8bbb7e3881268c384c26f82b"
Oct 08 13:50:24 crc kubenswrapper[5065]: I1008 13:50:24.375501 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 13:50:24 crc kubenswrapper[5065]: I1008 13:50:24.376112 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 13:50:54 crc
kubenswrapper[5065]: I1008 13:50:54.375690 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:50:54 crc kubenswrapper[5065]: I1008 13:50:54.376287 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:51:24 crc kubenswrapper[5065]: I1008 13:51:24.375773 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:51:24 crc kubenswrapper[5065]: I1008 13:51:24.376517 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:51:24 crc kubenswrapper[5065]: I1008 13:51:24.376590 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:51:24 crc kubenswrapper[5065]: I1008 13:51:24.377718 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a3209578244bfb18edfdedf23fdca2dfd93d1cd90c3c5e4ce9b449816632d79"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 13:51:24 crc kubenswrapper[5065]: I1008 13:51:24.377848 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://1a3209578244bfb18edfdedf23fdca2dfd93d1cd90c3c5e4ce9b449816632d79" gracePeriod=600 Oct 08 13:51:25 crc kubenswrapper[5065]: I1008 13:51:25.475773 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="1a3209578244bfb18edfdedf23fdca2dfd93d1cd90c3c5e4ce9b449816632d79" exitCode=0 Oct 08 13:51:25 crc kubenswrapper[5065]: I1008 13:51:25.475823 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"1a3209578244bfb18edfdedf23fdca2dfd93d1cd90c3c5e4ce9b449816632d79"} Oct 08 13:51:25 crc kubenswrapper[5065]: I1008 13:51:25.476090 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"} Oct 08 13:51:25 crc kubenswrapper[5065]: I1008 13:51:25.476116 5065 scope.go:117] "RemoveContainer" 
containerID="831fe5ff1097abea35bb2acbbe90ed1b187e2d1866515a8f541009743e832b0c" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.100465 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8fbvt"] Oct 08 13:52:46 crc kubenswrapper[5065]: E1008 13:52:46.101773 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f327948-557a-4479-b298-36c36ab07724" containerName="collect-profiles" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.101802 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f327948-557a-4479-b298-36c36ab07724" containerName="collect-profiles" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.102186 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f327948-557a-4479-b298-36c36ab07724" containerName="collect-profiles" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.104582 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.112053 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8fbvt"] Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.215741 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-utilities\") pod \"community-operators-8fbvt\" (UID: \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\") " pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.215845 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8q7w\" (UniqueName: \"kubernetes.io/projected/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-kube-api-access-t8q7w\") pod \"community-operators-8fbvt\" (UID: \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\") " pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.215875 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-catalog-content\") pod \"community-operators-8fbvt\" (UID: \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\") " pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.317319 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-utilities\") pod \"community-operators-8fbvt\" (UID: \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\") " pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.317385 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8q7w\" (UniqueName: \"kubernetes.io/projected/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-kube-api-access-t8q7w\") pod \"community-operators-8fbvt\" (UID: \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\") " pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.317418 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-catalog-content\") pod \"community-operators-8fbvt\" 
(UID: \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\") " pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.317922 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-utilities\") pod \"community-operators-8fbvt\" (UID: \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\") " pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.317953 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-catalog-content\") pod \"community-operators-8fbvt\" (UID: \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\") " pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.338357 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8q7w\" (UniqueName: \"kubernetes.io/projected/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-kube-api-access-t8q7w\") pod \"community-operators-8fbvt\" (UID: \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\") " pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.428734 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:46 crc kubenswrapper[5065]: I1008 13:52:46.972506 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8fbvt"] Oct 08 13:52:47 crc kubenswrapper[5065]: I1008 13:52:47.186551 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fbvt" event={"ID":"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83","Type":"ContainerStarted","Data":"6152201701cf963ecffd6022ebb31279963bcf505ac5fabe8d127e1dd5b3cd32"} Oct 08 13:52:48 crc kubenswrapper[5065]: I1008 13:52:48.195852 5065 generic.go:334] "Generic (PLEG): container finished" podID="d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" containerID="fb61c7d542b2ec2cfe32ae1532a1787c1dfcc3f6f0fa40f23c430ff0f5acac98" exitCode=0 Oct 08 13:52:48 crc kubenswrapper[5065]: I1008 13:52:48.195915 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fbvt" event={"ID":"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83","Type":"ContainerDied","Data":"fb61c7d542b2ec2cfe32ae1532a1787c1dfcc3f6f0fa40f23c430ff0f5acac98"} Oct 08 13:52:48 crc kubenswrapper[5065]: I1008 13:52:48.198724 5065 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 13:52:50 crc kubenswrapper[5065]: I1008 13:52:50.219768 5065 generic.go:334] "Generic (PLEG): container finished" podID="d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" containerID="6a55cb445a5389de2602625b85920a146659504503a4136a6cb256cbe0f4dfa7" exitCode=0 Oct 08 13:52:50 crc kubenswrapper[5065]: I1008 13:52:50.219958 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fbvt" event={"ID":"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83","Type":"ContainerDied","Data":"6a55cb445a5389de2602625b85920a146659504503a4136a6cb256cbe0f4dfa7"} Oct 08 13:52:51 crc kubenswrapper[5065]: I1008 13:52:51.233642 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fbvt" 
event={"ID":"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83","Type":"ContainerStarted","Data":"d374c0def862e50b940f61ce4c2dab0d3eab2095113661ac8b7c28d96322e6b7"} Oct 08 13:52:51 crc kubenswrapper[5065]: I1008 13:52:51.253463 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8fbvt" podStartSLOduration=2.786210825 podStartE2EDuration="5.25344531s" podCreationTimestamp="2025-10-08 13:52:46 +0000 UTC" firstStartedPulling="2025-10-08 13:52:48.198374622 +0000 UTC m=+2069.975756379" lastFinishedPulling="2025-10-08 13:52:50.665609097 +0000 UTC m=+2072.442990864" observedRunningTime="2025-10-08 13:52:51.252856764 +0000 UTC m=+2073.030238561" watchObservedRunningTime="2025-10-08 13:52:51.25344531 +0000 UTC m=+2073.030827067" Oct 08 13:52:56 crc kubenswrapper[5065]: I1008 13:52:56.429613 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:56 crc kubenswrapper[5065]: I1008 13:52:56.429733 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:56 crc kubenswrapper[5065]: I1008 13:52:56.490138 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.161819 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xwpsw"] Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.163395 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.180073 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xwpsw"] Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.306991 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3431769e-3336-4221-8d8e-76c75774491d-utilities\") pod \"certified-operators-xwpsw\" (UID: \"3431769e-3336-4221-8d8e-76c75774491d\") " pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.307120 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nxt7\" (UniqueName: \"kubernetes.io/projected/3431769e-3336-4221-8d8e-76c75774491d-kube-api-access-7nxt7\") pod \"certified-operators-xwpsw\" (UID: \"3431769e-3336-4221-8d8e-76c75774491d\") " pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.307155 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3431769e-3336-4221-8d8e-76c75774491d-catalog-content\") pod \"certified-operators-xwpsw\" (UID: \"3431769e-3336-4221-8d8e-76c75774491d\") " pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.321824 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.409125 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3431769e-3336-4221-8d8e-76c75774491d-utilities\") pod \"certified-operators-xwpsw\" (UID: \"3431769e-3336-4221-8d8e-76c75774491d\") " pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.409242 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nxt7\" (UniqueName: \"kubernetes.io/projected/3431769e-3336-4221-8d8e-76c75774491d-kube-api-access-7nxt7\") pod \"certified-operators-xwpsw\" (UID: \"3431769e-3336-4221-8d8e-76c75774491d\") " pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.409269 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3431769e-3336-4221-8d8e-76c75774491d-catalog-content\") pod \"certified-operators-xwpsw\" (UID: \"3431769e-3336-4221-8d8e-76c75774491d\") " pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.409689 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3431769e-3336-4221-8d8e-76c75774491d-utilities\") pod \"certified-operators-xwpsw\" (UID: \"3431769e-3336-4221-8d8e-76c75774491d\") " pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.409719 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3431769e-3336-4221-8d8e-76c75774491d-catalog-content\") pod \"certified-operators-xwpsw\" (UID: \"3431769e-3336-4221-8d8e-76c75774491d\") " pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.450890 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nxt7\" (UniqueName: \"kubernetes.io/projected/3431769e-3336-4221-8d8e-76c75774491d-kube-api-access-7nxt7\") pod \"certified-operators-xwpsw\" (UID: \"3431769e-3336-4221-8d8e-76c75774491d\") " pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.483516 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:52:57 crc kubenswrapper[5065]: I1008 13:52:57.944649 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xwpsw"] Oct 08 13:52:58 crc kubenswrapper[5065]: I1008 13:52:58.287233 5065 generic.go:334] "Generic (PLEG): container finished" podID="3431769e-3336-4221-8d8e-76c75774491d" containerID="84941748febc78b36dabf7251664e4008551fa8605a0e03e8ad3b261dbb27596" exitCode=0 Oct 08 13:52:58 crc kubenswrapper[5065]: I1008 13:52:58.287346 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwpsw" event={"ID":"3431769e-3336-4221-8d8e-76c75774491d","Type":"ContainerDied","Data":"84941748febc78b36dabf7251664e4008551fa8605a0e03e8ad3b261dbb27596"} Oct 08 13:52:58 crc kubenswrapper[5065]: I1008 13:52:58.287476 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwpsw" event={"ID":"3431769e-3336-4221-8d8e-76c75774491d","Type":"ContainerStarted","Data":"fd03848c892e00c5032babed6797966312280fa0c75eafca0adb4c4260929440"} Oct 08 13:52:59 crc kubenswrapper[5065]: I1008 13:52:59.304867 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwpsw" event={"ID":"3431769e-3336-4221-8d8e-76c75774491d","Type":"ContainerStarted","Data":"5fa6733e54cb7b6f47721e43ef474ad9cc021cf661ad2912c88562f8027a9794"} Oct 08 13:52:59 crc kubenswrapper[5065]: I1008 13:52:59.730913 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8fbvt"] Oct 08 13:52:59 crc kubenswrapper[5065]: I1008 13:52:59.731747 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8fbvt" podUID="d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" containerName="registry-server" containerID="cri-o://d374c0def862e50b940f61ce4c2dab0d3eab2095113661ac8b7c28d96322e6b7" gracePeriod=2 Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.181723 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.323223 5065 generic.go:334] "Generic (PLEG): container finished" podID="3431769e-3336-4221-8d8e-76c75774491d" containerID="5fa6733e54cb7b6f47721e43ef474ad9cc021cf661ad2912c88562f8027a9794" exitCode=0 Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.323311 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwpsw" event={"ID":"3431769e-3336-4221-8d8e-76c75774491d","Type":"ContainerDied","Data":"5fa6733e54cb7b6f47721e43ef474ad9cc021cf661ad2912c88562f8027a9794"} Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.329303 5065 generic.go:334] "Generic (PLEG): container finished" podID="d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" containerID="d374c0def862e50b940f61ce4c2dab0d3eab2095113661ac8b7c28d96322e6b7" exitCode=0 Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.329345 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fbvt" event={"ID":"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83","Type":"ContainerDied","Data":"d374c0def862e50b940f61ce4c2dab0d3eab2095113661ac8b7c28d96322e6b7"} Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.329375 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fbvt" event={"ID":"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83","Type":"ContainerDied","Data":"6152201701cf963ecffd6022ebb31279963bcf505ac5fabe8d127e1dd5b3cd32"} Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.329397 5065 scope.go:117] "RemoveContainer" containerID="d374c0def862e50b940f61ce4c2dab0d3eab2095113661ac8b7c28d96322e6b7" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.329542 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8fbvt" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.350187 5065 scope.go:117] "RemoveContainer" containerID="6a55cb445a5389de2602625b85920a146659504503a4136a6cb256cbe0f4dfa7" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.359282 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-utilities\") pod \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\" (UID: \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\") " Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.359402 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8q7w\" (UniqueName: \"kubernetes.io/projected/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-kube-api-access-t8q7w\") pod \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\" (UID: \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\") " Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.359551 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-catalog-content\") pod \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\" (UID: \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\") " Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.360156 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-utilities" (OuterVolumeSpecName: "utilities") pod "d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" (UID: "d962b1f5-d084-4aee-8bd5-bd1d3acbfa83"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.365090 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-kube-api-access-t8q7w" (OuterVolumeSpecName: "kube-api-access-t8q7w") pod "d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" (UID: "d962b1f5-d084-4aee-8bd5-bd1d3acbfa83"). InnerVolumeSpecName "kube-api-access-t8q7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.373762 5065 scope.go:117] "RemoveContainer" containerID="fb61c7d542b2ec2cfe32ae1532a1787c1dfcc3f6f0fa40f23c430ff0f5acac98" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.409281 5065 scope.go:117] "RemoveContainer" containerID="d374c0def862e50b940f61ce4c2dab0d3eab2095113661ac8b7c28d96322e6b7" Oct 08 13:53:00 crc kubenswrapper[5065]: E1008 13:53:00.409736 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d374c0def862e50b940f61ce4c2dab0d3eab2095113661ac8b7c28d96322e6b7\": container with ID starting with d374c0def862e50b940f61ce4c2dab0d3eab2095113661ac8b7c28d96322e6b7 not found: ID does not exist" containerID="d374c0def862e50b940f61ce4c2dab0d3eab2095113661ac8b7c28d96322e6b7" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.409778 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d374c0def862e50b940f61ce4c2dab0d3eab2095113661ac8b7c28d96322e6b7"} err="failed to get container status \"d374c0def862e50b940f61ce4c2dab0d3eab2095113661ac8b7c28d96322e6b7\": rpc error: code = NotFound desc = could not find container \"d374c0def862e50b940f61ce4c2dab0d3eab2095113661ac8b7c28d96322e6b7\": container with ID starting with d374c0def862e50b940f61ce4c2dab0d3eab2095113661ac8b7c28d96322e6b7 not found: ID does not exist" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.409805 5065 scope.go:117] "RemoveContainer" containerID="6a55cb445a5389de2602625b85920a146659504503a4136a6cb256cbe0f4dfa7" Oct 08 13:53:00 crc kubenswrapper[5065]: E1008 13:53:00.410103 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a55cb445a5389de2602625b85920a146659504503a4136a6cb256cbe0f4dfa7\": container with ID starting with 6a55cb445a5389de2602625b85920a146659504503a4136a6cb256cbe0f4dfa7 not found: ID does not exist" containerID="6a55cb445a5389de2602625b85920a146659504503a4136a6cb256cbe0f4dfa7" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.410136 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a55cb445a5389de2602625b85920a146659504503a4136a6cb256cbe0f4dfa7"} err="failed to get container status \"6a55cb445a5389de2602625b85920a146659504503a4136a6cb256cbe0f4dfa7\": rpc error: code = NotFound desc = could not find container \"6a55cb445a5389de2602625b85920a146659504503a4136a6cb256cbe0f4dfa7\": container with ID starting with 6a55cb445a5389de2602625b85920a146659504503a4136a6cb256cbe0f4dfa7 not found: ID does not exist" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.410157 5065 scope.go:117] "RemoveContainer" containerID="fb61c7d542b2ec2cfe32ae1532a1787c1dfcc3f6f0fa40f23c430ff0f5acac98" Oct 08 13:53:00 crc kubenswrapper[5065]: E1008 13:53:00.410435 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fb61c7d542b2ec2cfe32ae1532a1787c1dfcc3f6f0fa40f23c430ff0f5acac98\": container with ID starting with fb61c7d542b2ec2cfe32ae1532a1787c1dfcc3f6f0fa40f23c430ff0f5acac98 not found: ID does not exist" containerID="fb61c7d542b2ec2cfe32ae1532a1787c1dfcc3f6f0fa40f23c430ff0f5acac98" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.410457 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb61c7d542b2ec2cfe32ae1532a1787c1dfcc3f6f0fa40f23c430ff0f5acac98"} err="failed to get container status \"fb61c7d542b2ec2cfe32ae1532a1787c1dfcc3f6f0fa40f23c430ff0f5acac98\": rpc error: code = NotFound desc = could not find container \"fb61c7d542b2ec2cfe32ae1532a1787c1dfcc3f6f0fa40f23c430ff0f5acac98\": container with ID starting with fb61c7d542b2ec2cfe32ae1532a1787c1dfcc3f6f0fa40f23c430ff0f5acac98 not found: ID does not exist" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.460951 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.460994 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8q7w\" (UniqueName: \"kubernetes.io/projected/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-kube-api-access-t8q7w\") on node \"crc\" DevicePath \"\"" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.662837 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" (UID: "d962b1f5-d084-4aee-8bd5-bd1d3acbfa83"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.663030 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-catalog-content\") pod \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\" (UID: \"d962b1f5-d084-4aee-8bd5-bd1d3acbfa83\") " Oct 08 13:53:00 crc kubenswrapper[5065]: W1008 13:53:00.663138 5065 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83/volumes/kubernetes.io~empty-dir/catalog-content Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.663161 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" (UID: "d962b1f5-d084-4aee-8bd5-bd1d3acbfa83"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.663683 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.947904 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8fbvt"] Oct 08 13:53:00 crc kubenswrapper[5065]: I1008 13:53:00.962617 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8fbvt"] Oct 08 13:53:01 crc kubenswrapper[5065]: I1008 13:53:01.340296 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwpsw" event={"ID":"3431769e-3336-4221-8d8e-76c75774491d","Type":"ContainerStarted","Data":"3b3a6bfe42f199407be200c021eaa677c82ee357fb07b364377d4beb838c3013"} Oct 08 13:53:01 crc kubenswrapper[5065]: I1008 13:53:01.363019 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xwpsw" podStartSLOduration=1.8991205340000001 podStartE2EDuration="4.363001396s" podCreationTimestamp="2025-10-08 13:52:57 +0000 UTC" firstStartedPulling="2025-10-08 13:52:58.315895039 +0000 UTC m=+2080.093276796" lastFinishedPulling="2025-10-08 13:53:00.779775891 +0000 UTC m=+2082.557157658" observedRunningTime="2025-10-08 13:53:01.360450335 +0000 UTC m=+2083.137832092" watchObservedRunningTime="2025-10-08 13:53:01.363001396 +0000 UTC m=+2083.140383153" Oct 08 13:53:02 crc kubenswrapper[5065]: I1008 13:53:02.887357 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" path="/var/lib/kubelet/pods/d962b1f5-d084-4aee-8bd5-bd1d3acbfa83/volumes" Oct 08 13:53:07 crc kubenswrapper[5065]: I1008 13:53:07.483796 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:53:07 crc kubenswrapper[5065]: I1008 13:53:07.484154 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:53:07 crc kubenswrapper[5065]: I1008 13:53:07.557496 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:53:08 crc kubenswrapper[5065]: I1008 13:53:08.438900 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:53:08 crc kubenswrapper[5065]: I1008 13:53:08.494781 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xwpsw"] Oct 08 13:53:10 crc kubenswrapper[5065]: I1008 13:53:10.407399 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xwpsw" podUID="3431769e-3336-4221-8d8e-76c75774491d" containerName="registry-server" containerID="cri-o://3b3a6bfe42f199407be200c021eaa677c82ee357fb07b364377d4beb838c3013" gracePeriod=2 Oct 08 13:53:10 crc kubenswrapper[5065]: I1008 13:53:10.862370 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.022207 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3431769e-3336-4221-8d8e-76c75774491d-utilities\") pod \"3431769e-3336-4221-8d8e-76c75774491d\" (UID: \"3431769e-3336-4221-8d8e-76c75774491d\") " Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.022845 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nxt7\" (UniqueName: \"kubernetes.io/projected/3431769e-3336-4221-8d8e-76c75774491d-kube-api-access-7nxt7\") pod \"3431769e-3336-4221-8d8e-76c75774491d\" (UID: \"3431769e-3336-4221-8d8e-76c75774491d\") " Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.023230 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3431769e-3336-4221-8d8e-76c75774491d-catalog-content\") pod \"3431769e-3336-4221-8d8e-76c75774491d\" (UID: \"3431769e-3336-4221-8d8e-76c75774491d\") " Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.023502 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3431769e-3336-4221-8d8e-76c75774491d-utilities" (OuterVolumeSpecName: "utilities") pod "3431769e-3336-4221-8d8e-76c75774491d" (UID: "3431769e-3336-4221-8d8e-76c75774491d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.023923 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3431769e-3336-4221-8d8e-76c75774491d-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.031816 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3431769e-3336-4221-8d8e-76c75774491d-kube-api-access-7nxt7" (OuterVolumeSpecName: "kube-api-access-7nxt7") pod "3431769e-3336-4221-8d8e-76c75774491d" (UID: "3431769e-3336-4221-8d8e-76c75774491d"). InnerVolumeSpecName "kube-api-access-7nxt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.074572 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3431769e-3336-4221-8d8e-76c75774491d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3431769e-3336-4221-8d8e-76c75774491d" (UID: "3431769e-3336-4221-8d8e-76c75774491d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.125562 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3431769e-3336-4221-8d8e-76c75774491d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.125600 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nxt7\" (UniqueName: \"kubernetes.io/projected/3431769e-3336-4221-8d8e-76c75774491d-kube-api-access-7nxt7\") on node \"crc\" DevicePath \"\"" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.419032 5065 generic.go:334] "Generic (PLEG): container finished" podID="3431769e-3336-4221-8d8e-76c75774491d" containerID="3b3a6bfe42f199407be200c021eaa677c82ee357fb07b364377d4beb838c3013" exitCode=0 Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.419094 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwpsw" event={"ID":"3431769e-3336-4221-8d8e-76c75774491d","Type":"ContainerDied","Data":"3b3a6bfe42f199407be200c021eaa677c82ee357fb07b364377d4beb838c3013"} Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.419121 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xwpsw" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.420285 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwpsw" event={"ID":"3431769e-3336-4221-8d8e-76c75774491d","Type":"ContainerDied","Data":"fd03848c892e00c5032babed6797966312280fa0c75eafca0adb4c4260929440"} Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.420335 5065 scope.go:117] "RemoveContainer" containerID="3b3a6bfe42f199407be200c021eaa677c82ee357fb07b364377d4beb838c3013" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.464243 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xwpsw"] Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.467887 5065 scope.go:117] "RemoveContainer" containerID="5fa6733e54cb7b6f47721e43ef474ad9cc021cf661ad2912c88562f8027a9794" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.469334 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xwpsw"] Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.496633 5065 scope.go:117] "RemoveContainer" containerID="84941748febc78b36dabf7251664e4008551fa8605a0e03e8ad3b261dbb27596" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.529857 5065 scope.go:117] "RemoveContainer" containerID="3b3a6bfe42f199407be200c021eaa677c82ee357fb07b364377d4beb838c3013" Oct 08 13:53:11 crc kubenswrapper[5065]: E1008 13:53:11.530357 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b3a6bfe42f199407be200c021eaa677c82ee357fb07b364377d4beb838c3013\": container with ID starting with 3b3a6bfe42f199407be200c021eaa677c82ee357fb07b364377d4beb838c3013 not found: ID does not exist" containerID="3b3a6bfe42f199407be200c021eaa677c82ee357fb07b364377d4beb838c3013" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.530390 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b3a6bfe42f199407be200c021eaa677c82ee357fb07b364377d4beb838c3013"} err="failed to get container status 
\"3b3a6bfe42f199407be200c021eaa677c82ee357fb07b364377d4beb838c3013\": rpc error: code = NotFound desc = could not find container \"3b3a6bfe42f199407be200c021eaa677c82ee357fb07b364377d4beb838c3013\": container with ID starting with 3b3a6bfe42f199407be200c021eaa677c82ee357fb07b364377d4beb838c3013 not found: ID does not exist" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.530431 5065 scope.go:117] "RemoveContainer" containerID="5fa6733e54cb7b6f47721e43ef474ad9cc021cf661ad2912c88562f8027a9794" Oct 08 13:53:11 crc kubenswrapper[5065]: E1008 13:53:11.530687 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fa6733e54cb7b6f47721e43ef474ad9cc021cf661ad2912c88562f8027a9794\": container with ID starting with 5fa6733e54cb7b6f47721e43ef474ad9cc021cf661ad2912c88562f8027a9794 not found: ID does not exist" containerID="5fa6733e54cb7b6f47721e43ef474ad9cc021cf661ad2912c88562f8027a9794" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.530715 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa6733e54cb7b6f47721e43ef474ad9cc021cf661ad2912c88562f8027a9794"} err="failed to get container status \"5fa6733e54cb7b6f47721e43ef474ad9cc021cf661ad2912c88562f8027a9794\": rpc error: code = NotFound desc = could not find container \"5fa6733e54cb7b6f47721e43ef474ad9cc021cf661ad2912c88562f8027a9794\": container with ID starting with 5fa6733e54cb7b6f47721e43ef474ad9cc021cf661ad2912c88562f8027a9794 not found: ID does not exist" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.530734 5065 scope.go:117] "RemoveContainer" containerID="84941748febc78b36dabf7251664e4008551fa8605a0e03e8ad3b261dbb27596" Oct 08 13:53:11 crc kubenswrapper[5065]: E1008 13:53:11.530978 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84941748febc78b36dabf7251664e4008551fa8605a0e03e8ad3b261dbb27596\": container with ID starting with 84941748febc78b36dabf7251664e4008551fa8605a0e03e8ad3b261dbb27596 not found: ID does not exist" containerID="84941748febc78b36dabf7251664e4008551fa8605a0e03e8ad3b261dbb27596" Oct 08 13:53:11 crc kubenswrapper[5065]: I1008 13:53:11.530993 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84941748febc78b36dabf7251664e4008551fa8605a0e03e8ad3b261dbb27596"} err="failed to get container status \"84941748febc78b36dabf7251664e4008551fa8605a0e03e8ad3b261dbb27596\": rpc error: code = NotFound desc = could not find container \"84941748febc78b36dabf7251664e4008551fa8605a0e03e8ad3b261dbb27596\": container with ID starting with 84941748febc78b36dabf7251664e4008551fa8605a0e03e8ad3b261dbb27596 not found: ID does not exist" Oct 08 13:53:12 crc kubenswrapper[5065]: I1008 13:53:12.885652 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3431769e-3336-4221-8d8e-76c75774491d" path="/var/lib/kubelet/pods/3431769e-3336-4221-8d8e-76c75774491d/volumes" Oct 08 13:53:54 crc kubenswrapper[5065]: I1008 13:53:54.375073 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:53:54 crc kubenswrapper[5065]: I1008 13:53:54.375687 5065 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:54:24 crc kubenswrapper[5065]: I1008 13:54:24.375331 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:54:24 crc kubenswrapper[5065]: I1008 13:54:24.376742 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:54:54 crc kubenswrapper[5065]: I1008 13:54:54.376039 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 13:54:54 crc kubenswrapper[5065]: I1008 13:54:54.376796 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 13:54:54 crc kubenswrapper[5065]: I1008 13:54:54.376852 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 13:54:54 crc kubenswrapper[5065]: I1008 13:54:54.377596 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 13:54:54 crc kubenswrapper[5065]: I1008 13:54:54.377663 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020" gracePeriod=600 Oct 08 13:54:54 crc kubenswrapper[5065]: E1008 13:54:54.528012 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 13:54:55 crc kubenswrapper[5065]: I1008 13:54:55.344076 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020" exitCode=0 Oct 
08 13:54:55 crc kubenswrapper[5065]: I1008 13:54:55.344197 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"}
Oct 08 13:54:55 crc kubenswrapper[5065]: I1008 13:54:55.344262 5065 scope.go:117] "RemoveContainer" containerID="1a3209578244bfb18edfdedf23fdca2dfd93d1cd90c3c5e4ce9b449816632d79"
Oct 08 13:54:55 crc kubenswrapper[5065]: I1008 13:54:55.345756 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:54:55 crc kubenswrapper[5065]: E1008 13:54:55.346540 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:55:07 crc kubenswrapper[5065]: I1008 13:55:07.874084 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:55:07 crc kubenswrapper[5065]: E1008 13:55:07.875512 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:55:21 crc kubenswrapper[5065]: I1008 13:55:21.874890 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:55:21 crc kubenswrapper[5065]: E1008 13:55:21.879660 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:55:32 crc kubenswrapper[5065]: I1008 13:55:32.873921 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:55:32 crc kubenswrapper[5065]: E1008 13:55:32.875000 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:55:44 crc kubenswrapper[5065]: I1008 13:55:44.873454 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:55:44 crc kubenswrapper[5065]: E1008 13:55:44.874242 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:55:58 crc kubenswrapper[5065]: I1008 13:55:58.880265 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:55:58 crc kubenswrapper[5065]: E1008 13:55:58.881571 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:56:13 crc kubenswrapper[5065]: I1008 13:56:13.873589 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:56:13 crc kubenswrapper[5065]: E1008 13:56:13.874261 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:56:26 crc kubenswrapper[5065]: I1008 13:56:26.961374 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k5vrg"]
Oct 08 13:56:26 crc kubenswrapper[5065]: E1008 13:56:26.962452 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" containerName="extract-content"
Oct 08 13:56:26 crc kubenswrapper[5065]: I1008 13:56:26.962469 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" containerName="extract-content"
Oct 08 13:56:26 crc kubenswrapper[5065]: E1008 13:56:26.962487 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3431769e-3336-4221-8d8e-76c75774491d" containerName="extract-utilities"
Oct 08 13:56:26 crc kubenswrapper[5065]: I1008 13:56:26.962494 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3431769e-3336-4221-8d8e-76c75774491d" containerName="extract-utilities"
Oct 08 13:56:26 crc kubenswrapper[5065]: E1008 13:56:26.962506 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3431769e-3336-4221-8d8e-76c75774491d" containerName="extract-content"
Oct 08 13:56:26 crc kubenswrapper[5065]: I1008 13:56:26.962515 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3431769e-3336-4221-8d8e-76c75774491d" containerName="extract-content"
Oct 08 13:56:26 crc kubenswrapper[5065]: E1008 13:56:26.962537 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" containerName="extract-utilities"
Oct 08 13:56:26 crc kubenswrapper[5065]: I1008 13:56:26.962544 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" containerName="extract-utilities"
Oct 08 13:56:26 crc kubenswrapper[5065]: E1008 13:56:26.962562 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3431769e-3336-4221-8d8e-76c75774491d" containerName="registry-server"
Oct 08 13:56:26 crc kubenswrapper[5065]: I1008 13:56:26.962569 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3431769e-3336-4221-8d8e-76c75774491d" containerName="registry-server"
Oct 08 13:56:26 crc kubenswrapper[5065]: E1008 13:56:26.962583 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" containerName="registry-server"
Oct 08 13:56:26 crc kubenswrapper[5065]: I1008 13:56:26.962589 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" containerName="registry-server"
Oct 08 13:56:26 crc kubenswrapper[5065]: I1008 13:56:26.962748 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="3431769e-3336-4221-8d8e-76c75774491d" containerName="registry-server"
Oct 08 13:56:26 crc kubenswrapper[5065]: I1008 13:56:26.962774 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="d962b1f5-d084-4aee-8bd5-bd1d3acbfa83" containerName="registry-server"
Oct 08 13:56:26 crc kubenswrapper[5065]: I1008 13:56:26.963975 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:26 crc kubenswrapper[5065]: I1008 13:56:26.971803 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k5vrg"]
Oct 08 13:56:27 crc kubenswrapper[5065]: I1008 13:56:27.063339 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb85b5ea-9756-4c9c-881c-912be177a088-utilities\") pod \"redhat-operators-k5vrg\" (UID: \"cb85b5ea-9756-4c9c-881c-912be177a088\") " pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:27 crc kubenswrapper[5065]: I1008 13:56:27.063746 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjjg\" (UniqueName: \"kubernetes.io/projected/cb85b5ea-9756-4c9c-881c-912be177a088-kube-api-access-jvjjg\") pod \"redhat-operators-k5vrg\" (UID: \"cb85b5ea-9756-4c9c-881c-912be177a088\") " pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:27 crc kubenswrapper[5065]: I1008 13:56:27.063892 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb85b5ea-9756-4c9c-881c-912be177a088-catalog-content\") pod \"redhat-operators-k5vrg\" (UID: \"cb85b5ea-9756-4c9c-881c-912be177a088\") " pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:27 crc kubenswrapper[5065]: I1008 13:56:27.165121 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvjjg\" (UniqueName: \"kubernetes.io/projected/cb85b5ea-9756-4c9c-881c-912be177a088-kube-api-access-jvjjg\") pod \"redhat-operators-k5vrg\" (UID: \"cb85b5ea-9756-4c9c-881c-912be177a088\") " pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:27 crc kubenswrapper[5065]: I1008 13:56:27.165211 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb85b5ea-9756-4c9c-881c-912be177a088-catalog-content\") pod \"redhat-operators-k5vrg\" (UID: \"cb85b5ea-9756-4c9c-881c-912be177a088\") " pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:27 crc kubenswrapper[5065]: I1008 13:56:27.165250 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb85b5ea-9756-4c9c-881c-912be177a088-utilities\") pod \"redhat-operators-k5vrg\" (UID: \"cb85b5ea-9756-4c9c-881c-912be177a088\") " pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:27 crc kubenswrapper[5065]: I1008 13:56:27.165733 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb85b5ea-9756-4c9c-881c-912be177a088-catalog-content\") pod \"redhat-operators-k5vrg\" (UID: \"cb85b5ea-9756-4c9c-881c-912be177a088\") " pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:27 crc kubenswrapper[5065]: I1008 13:56:27.165832 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb85b5ea-9756-4c9c-881c-912be177a088-utilities\") pod \"redhat-operators-k5vrg\" (UID: \"cb85b5ea-9756-4c9c-881c-912be177a088\") " pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:27 crc kubenswrapper[5065]: I1008 13:56:27.184815 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvjjg\" (UniqueName: \"kubernetes.io/projected/cb85b5ea-9756-4c9c-881c-912be177a088-kube-api-access-jvjjg\") pod \"redhat-operators-k5vrg\" (UID: \"cb85b5ea-9756-4c9c-881c-912be177a088\") " pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:27 crc kubenswrapper[5065]: I1008 13:56:27.280303 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:27 crc kubenswrapper[5065]: I1008 13:56:27.712288 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k5vrg"]
Oct 08 13:56:28 crc kubenswrapper[5065]: I1008 13:56:28.139317 5065 generic.go:334] "Generic (PLEG): container finished" podID="cb85b5ea-9756-4c9c-881c-912be177a088" containerID="c1e5b181cb1564d1799ceb1aa6fa08345d70d0bc9bdc224cf1eb9360eda177d1" exitCode=0
Oct 08 13:56:28 crc kubenswrapper[5065]: I1008 13:56:28.139408 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5vrg" event={"ID":"cb85b5ea-9756-4c9c-881c-912be177a088","Type":"ContainerDied","Data":"c1e5b181cb1564d1799ceb1aa6fa08345d70d0bc9bdc224cf1eb9360eda177d1"}
Oct 08 13:56:28 crc kubenswrapper[5065]: I1008 13:56:28.139595 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5vrg" event={"ID":"cb85b5ea-9756-4c9c-881c-912be177a088","Type":"ContainerStarted","Data":"5a24d7f46c47b0ab7a381e2a8ce0eac6d2200473553b19a390c21349692b0221"}
Oct 08 13:56:28 crc kubenswrapper[5065]: I1008 13:56:28.878488 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:56:28 crc kubenswrapper[5065]: E1008 13:56:28.879107 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.362189 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qw6q2"]
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.364564 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.379544 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qw6q2"]
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.494112 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-catalog-content\") pod \"redhat-marketplace-qw6q2\" (UID: \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\") " pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.494527 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbbsr\" (UniqueName: \"kubernetes.io/projected/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-kube-api-access-xbbsr\") pod \"redhat-marketplace-qw6q2\" (UID: \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\") " pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.494630 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-utilities\") pod \"redhat-marketplace-qw6q2\" (UID: \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\") " pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.595492 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-catalog-content\") pod \"redhat-marketplace-qw6q2\" (UID: \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\") " pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.595574 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbbsr\" (UniqueName: \"kubernetes.io/projected/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-kube-api-access-xbbsr\") pod \"redhat-marketplace-qw6q2\" (UID: \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\") " pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.595599 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-utilities\") pod \"redhat-marketplace-qw6q2\" (UID: \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\") " pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.596200 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-catalog-content\") pod \"redhat-marketplace-qw6q2\" (UID: \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\") " pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.596215 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-utilities\") pod \"redhat-marketplace-qw6q2\" (UID: \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\") " pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.614317 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbbsr\" (UniqueName: \"kubernetes.io/projected/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-kube-api-access-xbbsr\") pod \"redhat-marketplace-qw6q2\" (UID: \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\") " pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.692560 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:29 crc kubenswrapper[5065]: I1008 13:56:29.911339 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qw6q2"]
Oct 08 13:56:29 crc kubenswrapper[5065]: W1008 13:56:29.915085 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2eb8d2a_b8a5_4e22_9b31_1235dbf9acfa.slice/crio-84e6aa17ec345e494d0b5656fdeec0d6e1f2f1478ae5bf8316dd3b41343084b1 WatchSource:0}: Error finding container 84e6aa17ec345e494d0b5656fdeec0d6e1f2f1478ae5bf8316dd3b41343084b1: Status 404 returned error can't find the container with id 84e6aa17ec345e494d0b5656fdeec0d6e1f2f1478ae5bf8316dd3b41343084b1
Oct 08 13:56:30 crc kubenswrapper[5065]: I1008 13:56:30.158175 5065 generic.go:334] "Generic (PLEG): container finished" podID="e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" containerID="8b08721133a3069e2fb9baea91cde1fde81ab0eaa5763f82771d8fd8cbdb8b09" exitCode=0
Oct 08 13:56:30 crc kubenswrapper[5065]: I1008 13:56:30.158231 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qw6q2" event={"ID":"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa","Type":"ContainerDied","Data":"8b08721133a3069e2fb9baea91cde1fde81ab0eaa5763f82771d8fd8cbdb8b09"}
Oct 08 13:56:30 crc kubenswrapper[5065]: I1008 13:56:30.158290 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qw6q2" event={"ID":"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa","Type":"ContainerStarted","Data":"84e6aa17ec345e494d0b5656fdeec0d6e1f2f1478ae5bf8316dd3b41343084b1"}
Oct 08 13:56:30 crc kubenswrapper[5065]: I1008 13:56:30.161282 5065 generic.go:334] "Generic (PLEG): container finished" podID="cb85b5ea-9756-4c9c-881c-912be177a088" containerID="05af69e14725f5e04d4ad630d5f52c46d49e4e6b46fe1d3d346eecb125f9fb28" exitCode=0
Oct 08 13:56:30 crc kubenswrapper[5065]: I1008 13:56:30.161506 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5vrg" event={"ID":"cb85b5ea-9756-4c9c-881c-912be177a088","Type":"ContainerDied","Data":"05af69e14725f5e04d4ad630d5f52c46d49e4e6b46fe1d3d346eecb125f9fb28"}
Oct 08 13:56:31 crc kubenswrapper[5065]: I1008 13:56:31.173843 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5vrg" event={"ID":"cb85b5ea-9756-4c9c-881c-912be177a088","Type":"ContainerStarted","Data":"75f818f2863b4f26543f987778c6d64bc7fd32eb454c8d387a87f269b8c1498a"}
Oct 08 13:56:31 crc kubenswrapper[5065]: I1008 13:56:31.175967 5065 generic.go:334] "Generic (PLEG): container finished" podID="e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" containerID="662370b83ebe6e2f0f7a3828839acb963c11a4f9091eeb439a530dcc708830d1" exitCode=0
Oct 08 13:56:31 crc kubenswrapper[5065]: I1008 13:56:31.176017 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qw6q2" event={"ID":"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa","Type":"ContainerDied","Data":"662370b83ebe6e2f0f7a3828839acb963c11a4f9091eeb439a530dcc708830d1"}
Oct 08 13:56:31 crc kubenswrapper[5065]: I1008 13:56:31.192910 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k5vrg" podStartSLOduration=2.7012090239999997 podStartE2EDuration="5.192891708s" podCreationTimestamp="2025-10-08 13:56:26 +0000 UTC" firstStartedPulling="2025-10-08 13:56:28.141324199 +0000 UTC m=+2289.918705956" lastFinishedPulling="2025-10-08 13:56:30.633006863 +0000 UTC m=+2292.410388640" observedRunningTime="2025-10-08 13:56:31.189057672 +0000 UTC m=+2292.966439459" watchObservedRunningTime="2025-10-08 13:56:31.192891708 +0000 UTC m=+2292.970273465"
Oct 08 13:56:32 crc kubenswrapper[5065]: I1008 13:56:32.183931 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qw6q2" event={"ID":"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa","Type":"ContainerStarted","Data":"b4ad6e3a03c0c1501c6e92632cb6fe8af26bf25bfbd928d38b6b169792c7680e"}
Oct 08 13:56:32 crc kubenswrapper[5065]: I1008 13:56:32.204158 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qw6q2" podStartSLOduration=1.725021694 podStartE2EDuration="3.2041321s" podCreationTimestamp="2025-10-08 13:56:29 +0000 UTC" firstStartedPulling="2025-10-08 13:56:30.16011701 +0000 UTC m=+2291.937498767" lastFinishedPulling="2025-10-08 13:56:31.639227416 +0000 UTC m=+2293.416609173" observedRunningTime="2025-10-08 13:56:32.199244205 +0000 UTC m=+2293.976625962" watchObservedRunningTime="2025-10-08 13:56:32.2041321 +0000 UTC m=+2293.981513857"
Oct 08 13:56:37 crc kubenswrapper[5065]: I1008 13:56:37.281049 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:37 crc kubenswrapper[5065]: I1008 13:56:37.282666 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:37 crc kubenswrapper[5065]: I1008 13:56:37.348709 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:38 crc kubenswrapper[5065]: I1008 13:56:38.338912 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:38 crc kubenswrapper[5065]: I1008 13:56:38.399768 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k5vrg"]
Oct 08 13:56:39 crc kubenswrapper[5065]: I1008 13:56:39.692849 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:39 crc kubenswrapper[5065]: I1008 13:56:39.692968 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:39 crc kubenswrapper[5065]: I1008 13:56:39.762629 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:40 crc kubenswrapper[5065]: I1008 13:56:40.272986 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k5vrg" podUID="cb85b5ea-9756-4c9c-881c-912be177a088" containerName="registry-server" containerID="cri-o://75f818f2863b4f26543f987778c6d64bc7fd32eb454c8d387a87f269b8c1498a" gracePeriod=2
Oct 08 13:56:40 crc kubenswrapper[5065]: I1008 13:56:40.316452 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:40 crc kubenswrapper[5065]: I1008 13:56:40.995263 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qw6q2"]
Oct 08 13:56:41 crc kubenswrapper[5065]: I1008 13:56:41.282306 5065 generic.go:334] "Generic (PLEG): container finished" podID="cb85b5ea-9756-4c9c-881c-912be177a088" containerID="75f818f2863b4f26543f987778c6d64bc7fd32eb454c8d387a87f269b8c1498a" exitCode=0
Oct 08 13:56:41 crc kubenswrapper[5065]: I1008 13:56:41.282344 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5vrg" event={"ID":"cb85b5ea-9756-4c9c-881c-912be177a088","Type":"ContainerDied","Data":"75f818f2863b4f26543f987778c6d64bc7fd32eb454c8d387a87f269b8c1498a"}
Oct 08 13:56:41 crc kubenswrapper[5065]: I1008 13:56:41.769868 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:41 crc kubenswrapper[5065]: I1008 13:56:41.881687 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb85b5ea-9756-4c9c-881c-912be177a088-utilities\") pod \"cb85b5ea-9756-4c9c-881c-912be177a088\" (UID: \"cb85b5ea-9756-4c9c-881c-912be177a088\") "
Oct 08 13:56:41 crc kubenswrapper[5065]: I1008 13:56:41.881825 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb85b5ea-9756-4c9c-881c-912be177a088-catalog-content\") pod \"cb85b5ea-9756-4c9c-881c-912be177a088\" (UID: \"cb85b5ea-9756-4c9c-881c-912be177a088\") "
Oct 08 13:56:41 crc kubenswrapper[5065]: I1008 13:56:41.881979 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvjjg\" (UniqueName: \"kubernetes.io/projected/cb85b5ea-9756-4c9c-881c-912be177a088-kube-api-access-jvjjg\") pod \"cb85b5ea-9756-4c9c-881c-912be177a088\" (UID: \"cb85b5ea-9756-4c9c-881c-912be177a088\") "
Oct 08 13:56:41 crc kubenswrapper[5065]: I1008 13:56:41.882755 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb85b5ea-9756-4c9c-881c-912be177a088-utilities" (OuterVolumeSpecName: "utilities") pod "cb85b5ea-9756-4c9c-881c-912be177a088" (UID: "cb85b5ea-9756-4c9c-881c-912be177a088"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:56:41 crc kubenswrapper[5065]: I1008 13:56:41.887924 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb85b5ea-9756-4c9c-881c-912be177a088-kube-api-access-jvjjg" (OuterVolumeSpecName: "kube-api-access-jvjjg") pod "cb85b5ea-9756-4c9c-881c-912be177a088" (UID: "cb85b5ea-9756-4c9c-881c-912be177a088"). InnerVolumeSpecName "kube-api-access-jvjjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:56:41 crc kubenswrapper[5065]: I1008 13:56:41.969528 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb85b5ea-9756-4c9c-881c-912be177a088-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb85b5ea-9756-4c9c-881c-912be177a088" (UID: "cb85b5ea-9756-4c9c-881c-912be177a088"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:56:41 crc kubenswrapper[5065]: I1008 13:56:41.983475 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvjjg\" (UniqueName: \"kubernetes.io/projected/cb85b5ea-9756-4c9c-881c-912be177a088-kube-api-access-jvjjg\") on node \"crc\" DevicePath \"\""
Oct 08 13:56:41 crc kubenswrapper[5065]: I1008 13:56:41.983504 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb85b5ea-9756-4c9c-881c-912be177a088-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 13:56:41 crc kubenswrapper[5065]: I1008 13:56:41.983515 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb85b5ea-9756-4c9c-881c-912be177a088-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.293458 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5vrg"
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.293532 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qw6q2" podUID="e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" containerName="registry-server" containerID="cri-o://b4ad6e3a03c0c1501c6e92632cb6fe8af26bf25bfbd928d38b6b169792c7680e" gracePeriod=2
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.293450 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5vrg" event={"ID":"cb85b5ea-9756-4c9c-881c-912be177a088","Type":"ContainerDied","Data":"5a24d7f46c47b0ab7a381e2a8ce0eac6d2200473553b19a390c21349692b0221"}
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.293631 5065 scope.go:117] "RemoveContainer" containerID="75f818f2863b4f26543f987778c6d64bc7fd32eb454c8d387a87f269b8c1498a"
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.335343 5065 scope.go:117] "RemoveContainer" containerID="05af69e14725f5e04d4ad630d5f52c46d49e4e6b46fe1d3d346eecb125f9fb28"
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.339778 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k5vrg"]
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.349092 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k5vrg"]
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.406990 5065 scope.go:117] "RemoveContainer" containerID="c1e5b181cb1564d1799ceb1aa6fa08345d70d0bc9bdc224cf1eb9360eda177d1"
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.638687 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.693860 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbbsr\" (UniqueName: \"kubernetes.io/projected/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-kube-api-access-xbbsr\") pod \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\" (UID: \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\") "
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.694031 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-catalog-content\") pod \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\" (UID: \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\") "
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.694113 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-utilities\") pod \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\" (UID: \"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa\") "
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.695121 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-utilities" (OuterVolumeSpecName: "utilities") pod "e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" (UID: "e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.695426 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.697939 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-kube-api-access-xbbsr" (OuterVolumeSpecName: "kube-api-access-xbbsr") pod "e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" (UID: "e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa"). InnerVolumeSpecName "kube-api-access-xbbsr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.705761 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" (UID: "e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.797466 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.797536 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbbsr\" (UniqueName: \"kubernetes.io/projected/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa-kube-api-access-xbbsr\") on node \"crc\" DevicePath \"\""
Oct 08 13:56:42 crc kubenswrapper[5065]: I1008 13:56:42.882623 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb85b5ea-9756-4c9c-881c-912be177a088" path="/var/lib/kubelet/pods/cb85b5ea-9756-4c9c-881c-912be177a088/volumes"
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.302688 5065 generic.go:334] "Generic (PLEG): container finished" podID="e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" containerID="b4ad6e3a03c0c1501c6e92632cb6fe8af26bf25bfbd928d38b6b169792c7680e" exitCode=0
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.302737 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qw6q2"
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.302740 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qw6q2" event={"ID":"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa","Type":"ContainerDied","Data":"b4ad6e3a03c0c1501c6e92632cb6fe8af26bf25bfbd928d38b6b169792c7680e"}
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.302859 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qw6q2" event={"ID":"e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa","Type":"ContainerDied","Data":"84e6aa17ec345e494d0b5656fdeec0d6e1f2f1478ae5bf8316dd3b41343084b1"}
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.302888 5065 scope.go:117] "RemoveContainer" containerID="b4ad6e3a03c0c1501c6e92632cb6fe8af26bf25bfbd928d38b6b169792c7680e"
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.325296 5065 scope.go:117] "RemoveContainer" containerID="662370b83ebe6e2f0f7a3828839acb963c11a4f9091eeb439a530dcc708830d1"
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.328100 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qw6q2"]
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.340323 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qw6q2"]
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.341675 5065 scope.go:117] "RemoveContainer" containerID="8b08721133a3069e2fb9baea91cde1fde81ab0eaa5763f82771d8fd8cbdb8b09"
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.372006 5065 scope.go:117] "RemoveContainer" containerID="b4ad6e3a03c0c1501c6e92632cb6fe8af26bf25bfbd928d38b6b169792c7680e"
Oct 08 13:56:43 crc kubenswrapper[5065]: E1008 13:56:43.372765 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4ad6e3a03c0c1501c6e92632cb6fe8af26bf25bfbd928d38b6b169792c7680e\": container with ID starting with b4ad6e3a03c0c1501c6e92632cb6fe8af26bf25bfbd928d38b6b169792c7680e not found: ID does not exist" containerID="b4ad6e3a03c0c1501c6e92632cb6fe8af26bf25bfbd928d38b6b169792c7680e"
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.372828 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4ad6e3a03c0c1501c6e92632cb6fe8af26bf25bfbd928d38b6b169792c7680e"} err="failed to get container status \"b4ad6e3a03c0c1501c6e92632cb6fe8af26bf25bfbd928d38b6b169792c7680e\": rpc error: code = NotFound desc = could not find container \"b4ad6e3a03c0c1501c6e92632cb6fe8af26bf25bfbd928d38b6b169792c7680e\": container with ID starting with b4ad6e3a03c0c1501c6e92632cb6fe8af26bf25bfbd928d38b6b169792c7680e not found: ID does not exist"
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.372875 5065 scope.go:117] "RemoveContainer" containerID="662370b83ebe6e2f0f7a3828839acb963c11a4f9091eeb439a530dcc708830d1"
Oct 08 13:56:43 crc kubenswrapper[5065]: E1008 13:56:43.373492 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662370b83ebe6e2f0f7a3828839acb963c11a4f9091eeb439a530dcc708830d1\": container with ID starting with 662370b83ebe6e2f0f7a3828839acb963c11a4f9091eeb439a530dcc708830d1 not found: ID does not exist" containerID="662370b83ebe6e2f0f7a3828839acb963c11a4f9091eeb439a530dcc708830d1"
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.373543 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662370b83ebe6e2f0f7a3828839acb963c11a4f9091eeb439a530dcc708830d1"} err="failed to get container status \"662370b83ebe6e2f0f7a3828839acb963c11a4f9091eeb439a530dcc708830d1\": rpc error: code = NotFound desc = could not find container \"662370b83ebe6e2f0f7a3828839acb963c11a4f9091eeb439a530dcc708830d1\": container with ID starting with 662370b83ebe6e2f0f7a3828839acb963c11a4f9091eeb439a530dcc708830d1 not found: ID does not exist"
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.373579 5065 scope.go:117] "RemoveContainer" containerID="8b08721133a3069e2fb9baea91cde1fde81ab0eaa5763f82771d8fd8cbdb8b09"
Oct 08 13:56:43 crc kubenswrapper[5065]: E1008 13:56:43.374078 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b08721133a3069e2fb9baea91cde1fde81ab0eaa5763f82771d8fd8cbdb8b09\": container with ID starting with 8b08721133a3069e2fb9baea91cde1fde81ab0eaa5763f82771d8fd8cbdb8b09 not found: ID does not exist" containerID="8b08721133a3069e2fb9baea91cde1fde81ab0eaa5763f82771d8fd8cbdb8b09"
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.374240 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b08721133a3069e2fb9baea91cde1fde81ab0eaa5763f82771d8fd8cbdb8b09"} err="failed to get container status \"8b08721133a3069e2fb9baea91cde1fde81ab0eaa5763f82771d8fd8cbdb8b09\": rpc error: code = NotFound desc = could not find container \"8b08721133a3069e2fb9baea91cde1fde81ab0eaa5763f82771d8fd8cbdb8b09\": container with ID starting with 8b08721133a3069e2fb9baea91cde1fde81ab0eaa5763f82771d8fd8cbdb8b09 not found: ID does not exist"
Oct 08 13:56:43 crc kubenswrapper[5065]: I1008 13:56:43.873301 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:56:43 crc kubenswrapper[5065]: E1008 13:56:43.873944 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:56:44 crc kubenswrapper[5065]: I1008 13:56:44.883098 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" path="/var/lib/kubelet/pods/e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa/volumes"
Oct 08 13:56:57 crc kubenswrapper[5065]: I1008 13:56:57.874538 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:56:57 crc kubenswrapper[5065]: E1008 13:56:57.875696 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:57:11 crc kubenswrapper[5065]: I1008 13:57:11.874388 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:57:11 crc kubenswrapper[5065]: E1008 13:57:11.875707 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:57:25 crc kubenswrapper[5065]: I1008 13:57:25.874055 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:57:25 crc kubenswrapper[5065]: E1008 13:57:25.874805 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:57:40 crc kubenswrapper[5065]: I1008 13:57:40.874091 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:57:40 crc kubenswrapper[5065]: E1008 13:57:40.875171 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:57:51 crc kubenswrapper[5065]: I1008 13:57:51.874260 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:57:51 crc kubenswrapper[5065]: E1008 13:57:51.875683 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:58:02 crc kubenswrapper[5065]: I1008 13:58:02.873742 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:58:02 crc kubenswrapper[5065]: E1008 13:58:02.875052 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:58:15 crc kubenswrapper[5065]: I1008 13:58:15.874090 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:58:15 crc kubenswrapper[5065]: E1008 13:58:15.875440 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:58:26 crc kubenswrapper[5065]: I1008 13:58:26.874402 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:58:26 crc kubenswrapper[5065]: E1008 13:58:26.875697 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:58:40 crc kubenswrapper[5065]: I1008 13:58:40.873515 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:58:40 crc kubenswrapper[5065]: E1008 13:58:40.874349 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:58:55 crc kubenswrapper[5065]: I1008 13:58:55.873720 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:58:55 crc kubenswrapper[5065]: E1008 13:58:55.874908 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:59:10 crc kubenswrapper[5065]: I1008 13:59:10.874739 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:59:10 crc kubenswrapper[5065]: E1008 13:59:10.875646 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:59:21 crc kubenswrapper[5065]: I1008 13:59:21.873350 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:59:21 crc kubenswrapper[5065]: E1008 13:59:21.874129 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:59:34 crc kubenswrapper[5065]: I1008 13:59:34.874539 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:59:34 crc kubenswrapper[5065]: E1008 13:59:34.875717 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 13:59:45 crc kubenswrapper[5065]: I1008 13:59:45.874015 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 13:59:45 crc kubenswrapper[5065]: E1008 13:59:45.875141 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.166905 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"]
Oct 08 14:00:00 crc kubenswrapper[5065]: E1008 14:00:00.167743 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb85b5ea-9756-4c9c-881c-912be177a088" containerName="registry-server"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.167758 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb85b5ea-9756-4c9c-881c-912be177a088" containerName="registry-server"
Oct 08 14:00:00 crc kubenswrapper[5065]: E1008 14:00:00.167775 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" containerName="extract-content"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.167784 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" containerName="extract-content"
Oct 08 14:00:00 crc kubenswrapper[5065]: E1008 14:00:00.167796 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" containerName="registry-server"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.167806 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" containerName="registry-server"
Oct 08 14:00:00 crc kubenswrapper[5065]: E1008 14:00:00.167823 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb85b5ea-9756-4c9c-881c-912be177a088" containerName="extract-utilities"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.167831 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb85b5ea-9756-4c9c-881c-912be177a088" containerName="extract-utilities"
Oct 08 14:00:00 crc kubenswrapper[5065]: E1008 14:00:00.167848 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb85b5ea-9756-4c9c-881c-912be177a088" containerName="extract-content"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.167856 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb85b5ea-9756-4c9c-881c-912be177a088" containerName="extract-content"
Oct 08 14:00:00 crc kubenswrapper[5065]: E1008 14:00:00.167880 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" containerName="extract-utilities"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.167888 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" containerName="extract-utilities"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.168068 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2eb8d2a-b8a5-4e22-9b31-1235dbf9acfa" containerName="registry-server"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.168089 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb85b5ea-9756-4c9c-881c-912be177a088" containerName="registry-server"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.168841 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.171740 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.173242 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.175380 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"]
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.318725 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx5rx\" (UniqueName: \"kubernetes.io/projected/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-kube-api-access-bx5rx\") pod \"collect-profiles-29332200-4qqcm\" (UID: \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.318848 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-secret-volume\") pod \"collect-profiles-29332200-4qqcm\" (UID: \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.318888 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-config-volume\") pod \"collect-profiles-29332200-4qqcm\" (UID: \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.419744 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx5rx\" (UniqueName: \"kubernetes.io/projected/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-kube-api-access-bx5rx\") pod \"collect-profiles-29332200-4qqcm\" (UID: \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.419835 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-secret-volume\") pod \"collect-profiles-29332200-4qqcm\" (UID: \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.419886 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-config-volume\") pod \"collect-profiles-29332200-4qqcm\" (UID: \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.420847 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-config-volume\") pod \"collect-profiles-29332200-4qqcm\" (UID: \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.424985 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-secret-volume\") pod \"collect-profiles-29332200-4qqcm\" (UID: \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.435372 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx5rx\" (UniqueName: \"kubernetes.io/projected/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-kube-api-access-bx5rx\") pod \"collect-profiles-29332200-4qqcm\" (UID: \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.594629 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"
Oct 08 14:00:00 crc kubenswrapper[5065]: I1008 14:00:00.873893 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 14:00:01 crc kubenswrapper[5065]: W1008 14:00:01.016117 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2c2c914_8d87_4d0f_9bb0_c7c46ba4d31c.slice/crio-f6a5da73163fa14ad06f6288db2510f8ad9b2e71a7ea0dec35098a6d3ab68ad7 WatchSource:0}: Error finding container f6a5da73163fa14ad06f6288db2510f8ad9b2e71a7ea0dec35098a6d3ab68ad7: Status 404 returned error can't find the container with id f6a5da73163fa14ad06f6288db2510f8ad9b2e71a7ea0dec35098a6d3ab68ad7
Oct 08 14:00:01 crc kubenswrapper[5065]: I1008 14:00:01.016697 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"]
Oct 08 14:00:01 crc kubenswrapper[5065]: I1008 14:00:01.129861 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"d7c1d33264277a986e26aa622d36c2df8711b876f8817b24cc8a897064a5be5f"}
Oct 08 14:00:01 crc kubenswrapper[5065]: I1008 14:00:01.131211 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm" event={"ID":"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c","Type":"ContainerStarted","Data":"6660f71acee9e3e8025ca3a789ebd33b14a8cc76a9c0d07cfabcaa12089a15dc"}
Oct 08 14:00:01 crc kubenswrapper[5065]: I1008 14:00:01.131237 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm" event={"ID":"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c","Type":"ContainerStarted","Data":"f6a5da73163fa14ad06f6288db2510f8ad9b2e71a7ea0dec35098a6d3ab68ad7"}
Oct 08 14:00:01 crc kubenswrapper[5065]: I1008 14:00:01.165473 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm" podStartSLOduration=1.165453343 podStartE2EDuration="1.165453343s" podCreationTimestamp="2025-10-08 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:00:01.163555891 +0000 UTC m=+2502.940937668" watchObservedRunningTime="2025-10-08 14:00:01.165453343 +0000 UTC m=+2502.942835120"
Oct 08 14:00:02 crc kubenswrapper[5065]: I1008 14:00:02.139354 5065 generic.go:334] "Generic (PLEG): container finished" podID="b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c" containerID="6660f71acee9e3e8025ca3a789ebd33b14a8cc76a9c0d07cfabcaa12089a15dc" exitCode=0
Oct 08 14:00:02 crc kubenswrapper[5065]: I1008 14:00:02.139452 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm" event={"ID":"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c","Type":"ContainerDied","Data":"6660f71acee9e3e8025ca3a789ebd33b14a8cc76a9c0d07cfabcaa12089a15dc"}
Oct 08 14:00:03 crc kubenswrapper[5065]: I1008 14:00:03.487990 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"
Oct 08 14:00:03 crc kubenswrapper[5065]: I1008 14:00:03.634734 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx5rx\" (UniqueName: \"kubernetes.io/projected/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-kube-api-access-bx5rx\") pod \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\" (UID: \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\") "
Oct 08 14:00:03 crc kubenswrapper[5065]: I1008 14:00:03.634789 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-config-volume\") pod \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\" (UID: \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\") "
Oct 08 14:00:03 crc kubenswrapper[5065]: I1008 14:00:03.634877 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-secret-volume\") pod \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\" (UID: \"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c\") "
Oct 08 14:00:03 crc kubenswrapper[5065]: I1008 14:00:03.636226 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-config-volume" (OuterVolumeSpecName: "config-volume") pod "b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c" (UID: "b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:00:03 crc kubenswrapper[5065]: I1008 14:00:03.643199 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-kube-api-access-bx5rx" (OuterVolumeSpecName: "kube-api-access-bx5rx") pod "b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c" (UID: "b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c"). InnerVolumeSpecName "kube-api-access-bx5rx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:00:03 crc kubenswrapper[5065]: I1008 14:00:03.644013 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c" (UID: "b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:00:03 crc kubenswrapper[5065]: I1008 14:00:03.736261 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx5rx\" (UniqueName: \"kubernetes.io/projected/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-kube-api-access-bx5rx\") on node \"crc\" DevicePath \"\""
Oct 08 14:00:03 crc kubenswrapper[5065]: I1008 14:00:03.736304 5065 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-config-volume\") on node \"crc\" DevicePath \"\""
Oct 08 14:00:03 crc kubenswrapper[5065]: I1008 14:00:03.736316 5065 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 08 14:00:04 crc kubenswrapper[5065]: I1008 14:00:04.157259 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm" event={"ID":"b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c","Type":"ContainerDied","Data":"f6a5da73163fa14ad06f6288db2510f8ad9b2e71a7ea0dec35098a6d3ab68ad7"}
Oct 08 14:00:04 crc kubenswrapper[5065]: I1008 14:00:04.157302 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6a5da73163fa14ad06f6288db2510f8ad9b2e71a7ea0dec35098a6d3ab68ad7"
Oct 08 14:00:04 crc kubenswrapper[5065]: I1008 14:00:04.157316 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"
Oct 08 14:00:04 crc kubenswrapper[5065]: I1008 14:00:04.594481 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm"]
Oct 08 14:00:04 crc kubenswrapper[5065]: I1008 14:00:04.606640 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332155-547bm"]
Oct 08 14:00:04 crc kubenswrapper[5065]: I1008 14:00:04.889590 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b719c48b-49ca-4947-8e2f-77523c4360ac" path="/var/lib/kubelet/pods/b719c48b-49ca-4947-8e2f-77523c4360ac/volumes"
Oct 08 14:00:28 crc kubenswrapper[5065]: I1008 14:00:28.492830 5065 scope.go:117] "RemoveContainer" containerID="dae1f068e7e23270d1784bc7ffcd34c0b203381e79de5279b4107fe2c12813dd"
Oct 08 14:02:24 crc kubenswrapper[5065]: I1008 14:02:24.375476 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 14:02:24 crc kubenswrapper[5065]: I1008 14:02:24.376064 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 14:02:54 crc kubenswrapper[5065]: I1008 14:02:54.375525 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 14:02:54 crc kubenswrapper[5065]: I1008 14:02:54.376023 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 14:03:24 crc kubenswrapper[5065]: I1008 14:03:24.375456 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 14:03:24 crc kubenswrapper[5065]: I1008 14:03:24.376716 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 14:03:24 crc kubenswrapper[5065]: I1008 14:03:24.376805 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj"
Oct 08 14:03:24 crc kubenswrapper[5065]: I1008 14:03:24.377719 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7c1d33264277a986e26aa622d36c2df8711b876f8817b24cc8a897064a5be5f"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 14:03:24 crc kubenswrapper[5065]: I1008 14:03:24.377879 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://d7c1d33264277a986e26aa622d36c2df8711b876f8817b24cc8a897064a5be5f" gracePeriod=600
Oct 08 14:03:24 crc kubenswrapper[5065]: I1008 14:03:24.952654 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="d7c1d33264277a986e26aa622d36c2df8711b876f8817b24cc8a897064a5be5f" exitCode=0
Oct 08 14:03:24 crc kubenswrapper[5065]: I1008 14:03:24.952687 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"d7c1d33264277a986e26aa622d36c2df8711b876f8817b24cc8a897064a5be5f"}
Oct 08 14:03:24 crc kubenswrapper[5065]: I1008 14:03:24.952977 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d"}
Oct 08 14:03:24 crc kubenswrapper[5065]: I1008 14:03:24.953002 5065 scope.go:117] "RemoveContainer" containerID="d5eb4029e358daf59e719d85ce6be17d10a364edbe8a78e1bdb35a668396a020"
Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.163861 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h6s5m"]
Oct 08 14:03:53 crc kubenswrapper[5065]: E1008 14:03:53.165211 5065 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c" containerName="collect-profiles" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.165245 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c" containerName="collect-profiles" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.165669 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c" containerName="collect-profiles" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.167175 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.178191 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h6s5m"] Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.336879 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dszrq\" (UniqueName: \"kubernetes.io/projected/7db76d2a-3052-40ff-9e53-f6ffac1c7aa0-kube-api-access-dszrq\") pod \"certified-operators-h6s5m\" (UID: \"7db76d2a-3052-40ff-9e53-f6ffac1c7aa0\") " pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.336942 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db76d2a-3052-40ff-9e53-f6ffac1c7aa0-catalog-content\") pod \"certified-operators-h6s5m\" (UID: \"7db76d2a-3052-40ff-9e53-f6ffac1c7aa0\") " pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.337028 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db76d2a-3052-40ff-9e53-f6ffac1c7aa0-utilities\") pod \"certified-operators-h6s5m\" (UID: \"7db76d2a-3052-40ff-9e53-f6ffac1c7aa0\") " pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.438773 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db76d2a-3052-40ff-9e53-f6ffac1c7aa0-utilities\") pod \"certified-operators-h6s5m\" (UID: \"7db76d2a-3052-40ff-9e53-f6ffac1c7aa0\") " pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.438974 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dszrq\" (UniqueName: \"kubernetes.io/projected/7db76d2a-3052-40ff-9e53-f6ffac1c7aa0-kube-api-access-dszrq\") pod \"certified-operators-h6s5m\" (UID: \"7db76d2a-3052-40ff-9e53-f6ffac1c7aa0\") " pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.439018 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db76d2a-3052-40ff-9e53-f6ffac1c7aa0-catalog-content\") pod \"certified-operators-h6s5m\" (UID: \"7db76d2a-3052-40ff-9e53-f6ffac1c7aa0\") " pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.439612 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7db76d2a-3052-40ff-9e53-f6ffac1c7aa0-utilities\") pod \"certified-operators-h6s5m\" (UID: \"7db76d2a-3052-40ff-9e53-f6ffac1c7aa0\") " pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.439696 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db76d2a-3052-40ff-9e53-f6ffac1c7aa0-catalog-content\") pod \"certified-operators-h6s5m\" (UID: \"7db76d2a-3052-40ff-9e53-f6ffac1c7aa0\") " pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.470435 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dszrq\" (UniqueName: \"kubernetes.io/projected/7db76d2a-3052-40ff-9e53-f6ffac1c7aa0-kube-api-access-dszrq\") pod \"certified-operators-h6s5m\" (UID: \"7db76d2a-3052-40ff-9e53-f6ffac1c7aa0\") " pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.498962 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:03:53 crc kubenswrapper[5065]: I1008 14:03:53.988089 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h6s5m"] Oct 08 14:03:54 crc kubenswrapper[5065]: I1008 14:03:54.205298 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6s5m" event={"ID":"7db76d2a-3052-40ff-9e53-f6ffac1c7aa0","Type":"ContainerStarted","Data":"4380bac45acc56a205c21fab61d4181e770131e6c5d60c63838ff10fbbfd0d14"} Oct 08 14:03:55 crc kubenswrapper[5065]: I1008 14:03:55.220094 5065 generic.go:334] "Generic (PLEG): container finished" podID="7db76d2a-3052-40ff-9e53-f6ffac1c7aa0" containerID="feab842be99dbfa3a04cd50fa541d3a939d0655b25e7e65247f6a4bca18690a3" exitCode=0 Oct 08 14:03:55 crc kubenswrapper[5065]: I1008 14:03:55.220163 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6s5m" event={"ID":"7db76d2a-3052-40ff-9e53-f6ffac1c7aa0","Type":"ContainerDied","Data":"feab842be99dbfa3a04cd50fa541d3a939d0655b25e7e65247f6a4bca18690a3"} Oct 08 14:03:55 crc kubenswrapper[5065]: I1008 14:03:55.223631 5065 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 14:04:03 crc kubenswrapper[5065]: I1008 14:04:03.301916 5065 generic.go:334] "Generic (PLEG): container finished" podID="7db76d2a-3052-40ff-9e53-f6ffac1c7aa0" containerID="733eb940bc996f430cc319bf767ecba4c5c17045859a284c0709ee229c643ad8" exitCode=0 Oct 08 14:04:03 crc kubenswrapper[5065]: I1008 14:04:03.302032 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6s5m" event={"ID":"7db76d2a-3052-40ff-9e53-f6ffac1c7aa0","Type":"ContainerDied","Data":"733eb940bc996f430cc319bf767ecba4c5c17045859a284c0709ee229c643ad8"} Oct 08 14:04:05 crc kubenswrapper[5065]: I1008 14:04:05.325095 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6s5m" event={"ID":"7db76d2a-3052-40ff-9e53-f6ffac1c7aa0","Type":"ContainerStarted","Data":"904f0c50f8d64d3003fc873e523888bc311cd5a32a6720417e1795ead8aabe8b"} Oct 08 14:04:13 crc kubenswrapper[5065]: I1008 14:04:13.500186 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:04:13 crc 
kubenswrapper[5065]: I1008 14:04:13.500939 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:04:13 crc kubenswrapper[5065]: I1008 14:04:13.543329 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:04:13 crc kubenswrapper[5065]: I1008 14:04:13.569689 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h6s5m" podStartSLOduration=11.658180282 podStartE2EDuration="20.569674673s" podCreationTimestamp="2025-10-08 14:03:53 +0000 UTC" firstStartedPulling="2025-10-08 14:03:55.223176961 +0000 UTC m=+2737.000558758" lastFinishedPulling="2025-10-08 14:04:04.134671362 +0000 UTC m=+2745.912053149" observedRunningTime="2025-10-08 14:04:05.35560108 +0000 UTC m=+2747.132982937" watchObservedRunningTime="2025-10-08 14:04:13.569674673 +0000 UTC m=+2755.347056430" Oct 08 14:04:14 crc kubenswrapper[5065]: I1008 14:04:14.481113 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h6s5m" Oct 08 14:04:14 crc kubenswrapper[5065]: I1008 14:04:14.558873 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h6s5m"] Oct 08 14:04:14 crc kubenswrapper[5065]: I1008 14:04:14.637552 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hwt7f"] Oct 08 14:04:14 crc kubenswrapper[5065]: I1008 14:04:14.638201 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hwt7f" podUID="ee894e89-6ecf-47d8-963a-85d1ed74cdce" containerName="registry-server" containerID="cri-o://ff2a993f2ea5f29307dfaf98be612b86107dcda811da93ff58a9ea89ab03206d" gracePeriod=2 Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.128536 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.190903 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee894e89-6ecf-47d8-963a-85d1ed74cdce-catalog-content\") pod \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\" (UID: \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\") " Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.190956 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pm79\" (UniqueName: \"kubernetes.io/projected/ee894e89-6ecf-47d8-963a-85d1ed74cdce-kube-api-access-7pm79\") pod \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\" (UID: \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\") " Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.190996 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee894e89-6ecf-47d8-963a-85d1ed74cdce-utilities\") pod \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\" (UID: \"ee894e89-6ecf-47d8-963a-85d1ed74cdce\") " Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.191737 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee894e89-6ecf-47d8-963a-85d1ed74cdce-utilities" (OuterVolumeSpecName: "utilities") pod "ee894e89-6ecf-47d8-963a-85d1ed74cdce" (UID: "ee894e89-6ecf-47d8-963a-85d1ed74cdce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.196448 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee894e89-6ecf-47d8-963a-85d1ed74cdce-kube-api-access-7pm79" (OuterVolumeSpecName: "kube-api-access-7pm79") pod "ee894e89-6ecf-47d8-963a-85d1ed74cdce" (UID: "ee894e89-6ecf-47d8-963a-85d1ed74cdce"). InnerVolumeSpecName "kube-api-access-7pm79". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.233126 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee894e89-6ecf-47d8-963a-85d1ed74cdce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee894e89-6ecf-47d8-963a-85d1ed74cdce" (UID: "ee894e89-6ecf-47d8-963a-85d1ed74cdce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.292130 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee894e89-6ecf-47d8-963a-85d1ed74cdce-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.292485 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pm79\" (UniqueName: \"kubernetes.io/projected/ee894e89-6ecf-47d8-963a-85d1ed74cdce-kube-api-access-7pm79\") on node \"crc\" DevicePath \"\"" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.292500 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee894e89-6ecf-47d8-963a-85d1ed74cdce-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.424286 5065 generic.go:334] "Generic (PLEG): container finished" podID="ee894e89-6ecf-47d8-963a-85d1ed74cdce" containerID="ff2a993f2ea5f29307dfaf98be612b86107dcda811da93ff58a9ea89ab03206d" exitCode=0 Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.424376 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwt7f" event={"ID":"ee894e89-6ecf-47d8-963a-85d1ed74cdce","Type":"ContainerDied","Data":"ff2a993f2ea5f29307dfaf98be612b86107dcda811da93ff58a9ea89ab03206d"} Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.424454 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwt7f" event={"ID":"ee894e89-6ecf-47d8-963a-85d1ed74cdce","Type":"ContainerDied","Data":"3c41770722ad0656fa0d61e3ed723adf0187729083b69da750582c3157fd07cd"} Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.424480 5065 scope.go:117] "RemoveContainer" containerID="ff2a993f2ea5f29307dfaf98be612b86107dcda811da93ff58a9ea89ab03206d" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.424699 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hwt7f" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.445532 5065 scope.go:117] "RemoveContainer" containerID="8f37fa1879f70a8ab8eadccd734281004c372ae3a8f121e9360f62231be24b25" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.457025 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hwt7f"] Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.468605 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hwt7f"] Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.481528 5065 scope.go:117] "RemoveContainer" containerID="76da2977ac25404893a98bb761b7000524f38ff2d9e30b5f5173bbb175eb6154" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.503645 5065 scope.go:117] "RemoveContainer" containerID="ff2a993f2ea5f29307dfaf98be612b86107dcda811da93ff58a9ea89ab03206d" Oct 08 14:04:15 crc kubenswrapper[5065]: E1008 14:04:15.504200 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff2a993f2ea5f29307dfaf98be612b86107dcda811da93ff58a9ea89ab03206d\": container with ID starting with ff2a993f2ea5f29307dfaf98be612b86107dcda811da93ff58a9ea89ab03206d not found: ID does not exist" containerID="ff2a993f2ea5f29307dfaf98be612b86107dcda811da93ff58a9ea89ab03206d" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.504331 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2a993f2ea5f29307dfaf98be612b86107dcda811da93ff58a9ea89ab03206d"} err="failed to get container status \"ff2a993f2ea5f29307dfaf98be612b86107dcda811da93ff58a9ea89ab03206d\": rpc error: code = NotFound desc = could not find container \"ff2a993f2ea5f29307dfaf98be612b86107dcda811da93ff58a9ea89ab03206d\": container with ID starting with ff2a993f2ea5f29307dfaf98be612b86107dcda811da93ff58a9ea89ab03206d not found: ID does not exist" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.504480 5065 scope.go:117] "RemoveContainer" containerID="8f37fa1879f70a8ab8eadccd734281004c372ae3a8f121e9360f62231be24b25" Oct 08 14:04:15 crc kubenswrapper[5065]: E1008 14:04:15.504926 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f37fa1879f70a8ab8eadccd734281004c372ae3a8f121e9360f62231be24b25\": container with ID starting with 8f37fa1879f70a8ab8eadccd734281004c372ae3a8f121e9360f62231be24b25 not found: ID does not exist" containerID="8f37fa1879f70a8ab8eadccd734281004c372ae3a8f121e9360f62231be24b25" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.504968 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f37fa1879f70a8ab8eadccd734281004c372ae3a8f121e9360f62231be24b25"} err="failed to get container status \"8f37fa1879f70a8ab8eadccd734281004c372ae3a8f121e9360f62231be24b25\": rpc error: code = NotFound desc = could not find container \"8f37fa1879f70a8ab8eadccd734281004c372ae3a8f121e9360f62231be24b25\": container with ID starting with 8f37fa1879f70a8ab8eadccd734281004c372ae3a8f121e9360f62231be24b25 not found: ID does not exist" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.505001 5065 scope.go:117] "RemoveContainer" containerID="76da2977ac25404893a98bb761b7000524f38ff2d9e30b5f5173bbb175eb6154" Oct 08 14:04:15 crc kubenswrapper[5065]: E1008 14:04:15.505319 5065 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"76da2977ac25404893a98bb761b7000524f38ff2d9e30b5f5173bbb175eb6154\": container with ID starting with 76da2977ac25404893a98bb761b7000524f38ff2d9e30b5f5173bbb175eb6154 not found: ID does not exist" containerID="76da2977ac25404893a98bb761b7000524f38ff2d9e30b5f5173bbb175eb6154" Oct 08 14:04:15 crc kubenswrapper[5065]: I1008 14:04:15.505439 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76da2977ac25404893a98bb761b7000524f38ff2d9e30b5f5173bbb175eb6154"} err="failed to get container status \"76da2977ac25404893a98bb761b7000524f38ff2d9e30b5f5173bbb175eb6154\": rpc error: code = NotFound desc = could not find container \"76da2977ac25404893a98bb761b7000524f38ff2d9e30b5f5173bbb175eb6154\": container with ID starting with 76da2977ac25404893a98bb761b7000524f38ff2d9e30b5f5173bbb175eb6154 not found: ID does not exist" Oct 08 14:04:16 crc kubenswrapper[5065]: I1008 14:04:16.886990 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee894e89-6ecf-47d8-963a-85d1ed74cdce" path="/var/lib/kubelet/pods/ee894e89-6ecf-47d8-963a-85d1ed74cdce/volumes" Oct 08 14:05:24 crc kubenswrapper[5065]: I1008 14:05:24.376063 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:05:24 crc kubenswrapper[5065]: I1008 14:05:24.377628 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:05:54 crc kubenswrapper[5065]: I1008 14:05:54.375727 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:05:54 crc kubenswrapper[5065]: I1008 14:05:54.376365 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:06:24 crc kubenswrapper[5065]: I1008 14:06:24.376154 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:06:24 crc kubenswrapper[5065]: I1008 14:06:24.377067 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:06:24 crc kubenswrapper[5065]: I1008 14:06:24.377142 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 14:06:24 crc kubenswrapper[5065]: I1008 14:06:24.378203 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:06:24 crc kubenswrapper[5065]: I1008 14:06:24.378296 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" gracePeriod=600 Oct 08 14:06:24 crc kubenswrapper[5065]: E1008 14:06:24.509607 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:06:24 crc kubenswrapper[5065]: I1008 14:06:24.684410 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" exitCode=0 Oct 08 14:06:24 crc kubenswrapper[5065]: I1008 14:06:24.684487 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d"} Oct 08 14:06:24 crc kubenswrapper[5065]: I1008 14:06:24.685044 5065 scope.go:117] "RemoveContainer" containerID="d7c1d33264277a986e26aa622d36c2df8711b876f8817b24cc8a897064a5be5f" Oct 08 14:06:24 crc kubenswrapper[5065]: I1008 14:06:24.688713 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:06:24 crc kubenswrapper[5065]: E1008 14:06:24.689149 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:06:37 crc kubenswrapper[5065]: I1008 14:06:37.873613 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:06:37 crc kubenswrapper[5065]: E1008 14:06:37.874737 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:06:52 crc 
kubenswrapper[5065]: I1008 14:06:52.873381 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:06:52 crc kubenswrapper[5065]: E1008 14:06:52.874570 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:07:03 crc kubenswrapper[5065]: I1008 14:07:03.873559 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:07:03 crc kubenswrapper[5065]: E1008 14:07:03.874212 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.399722 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nxrld"] Oct 08 14:07:07 crc kubenswrapper[5065]: E1008 14:07:07.401884 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee894e89-6ecf-47d8-963a-85d1ed74cdce" containerName="extract-utilities" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.401922 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee894e89-6ecf-47d8-963a-85d1ed74cdce" containerName="extract-utilities" Oct 08 14:07:07 crc kubenswrapper[5065]: E1008 14:07:07.401954 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee894e89-6ecf-47d8-963a-85d1ed74cdce" containerName="registry-server" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.401965 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee894e89-6ecf-47d8-963a-85d1ed74cdce" containerName="registry-server" Oct 08 14:07:07 crc kubenswrapper[5065]: E1008 14:07:07.401992 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee894e89-6ecf-47d8-963a-85d1ed74cdce" containerName="extract-content" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.402005 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee894e89-6ecf-47d8-963a-85d1ed74cdce" containerName="extract-content" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.402236 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee894e89-6ecf-47d8-963a-85d1ed74cdce" containerName="registry-server" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.403971 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.422276 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nxrld"] Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.539046 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782b815d-b73d-4153-a65c-f79a52afc00e-utilities\") pod \"redhat-marketplace-nxrld\" (UID: \"782b815d-b73d-4153-a65c-f79a52afc00e\") " pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.539133 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782b815d-b73d-4153-a65c-f79a52afc00e-catalog-content\") pod \"redhat-marketplace-nxrld\" (UID: \"782b815d-b73d-4153-a65c-f79a52afc00e\") " pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.539356 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvlzh\" (UniqueName: \"kubernetes.io/projected/782b815d-b73d-4153-a65c-f79a52afc00e-kube-api-access-tvlzh\") pod \"redhat-marketplace-nxrld\" (UID: \"782b815d-b73d-4153-a65c-f79a52afc00e\") " pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.640788 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782b815d-b73d-4153-a65c-f79a52afc00e-catalog-content\") pod \"redhat-marketplace-nxrld\" (UID: \"782b815d-b73d-4153-a65c-f79a52afc00e\") " pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.640900 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvlzh\" (UniqueName: \"kubernetes.io/projected/782b815d-b73d-4153-a65c-f79a52afc00e-kube-api-access-tvlzh\") pod \"redhat-marketplace-nxrld\" (UID: \"782b815d-b73d-4153-a65c-f79a52afc00e\") " pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.640978 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782b815d-b73d-4153-a65c-f79a52afc00e-utilities\") pod \"redhat-marketplace-nxrld\" (UID: \"782b815d-b73d-4153-a65c-f79a52afc00e\") " pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.641659 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782b815d-b73d-4153-a65c-f79a52afc00e-utilities\") pod \"redhat-marketplace-nxrld\" (UID: \"782b815d-b73d-4153-a65c-f79a52afc00e\") " pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.641655 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782b815d-b73d-4153-a65c-f79a52afc00e-catalog-content\") pod \"redhat-marketplace-nxrld\" (UID: \"782b815d-b73d-4153-a65c-f79a52afc00e\") " pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.663027 5065 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tvlzh\" (UniqueName: \"kubernetes.io/projected/782b815d-b73d-4153-a65c-f79a52afc00e-kube-api-access-tvlzh\") pod \"redhat-marketplace-nxrld\" (UID: \"782b815d-b73d-4153-a65c-f79a52afc00e\") " pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:07 crc kubenswrapper[5065]: I1008 14:07:07.736638 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:08 crc kubenswrapper[5065]: I1008 14:07:08.068504 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nxrld"] Oct 08 14:07:09 crc kubenswrapper[5065]: I1008 14:07:09.097085 5065 generic.go:334] "Generic (PLEG): container finished" podID="782b815d-b73d-4153-a65c-f79a52afc00e" containerID="6007fc2ea32d5b4dd1594a9ca78f16389fcf1a52f614c285964f5603ca38f336" exitCode=0 Oct 08 14:07:09 crc kubenswrapper[5065]: I1008 14:07:09.097181 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nxrld" event={"ID":"782b815d-b73d-4153-a65c-f79a52afc00e","Type":"ContainerDied","Data":"6007fc2ea32d5b4dd1594a9ca78f16389fcf1a52f614c285964f5603ca38f336"} Oct 08 14:07:09 crc kubenswrapper[5065]: I1008 14:07:09.097465 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nxrld" event={"ID":"782b815d-b73d-4153-a65c-f79a52afc00e","Type":"ContainerStarted","Data":"a9ccbe2fb8d6ae2ab2c4fe86d6df031d96f96352945c2c330f78898a18fe30d7"} Oct 08 14:07:11 crc kubenswrapper[5065]: I1008 14:07:11.115120 5065 generic.go:334] "Generic (PLEG): container finished" podID="782b815d-b73d-4153-a65c-f79a52afc00e" containerID="aae93cf5bbc548092ca9c41dd23a02437af3e163847fa64bfebe55003ee89c61" exitCode=0 Oct 08 14:07:11 crc kubenswrapper[5065]: I1008 14:07:11.115192 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nxrld" event={"ID":"782b815d-b73d-4153-a65c-f79a52afc00e","Type":"ContainerDied","Data":"aae93cf5bbc548092ca9c41dd23a02437af3e163847fa64bfebe55003ee89c61"} Oct 08 14:07:12 crc kubenswrapper[5065]: I1008 14:07:12.125493 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nxrld" event={"ID":"782b815d-b73d-4153-a65c-f79a52afc00e","Type":"ContainerStarted","Data":"0a6e26287b340946aafc0a8fb477b4a783f4d485c053af7dc8bc4d576cfc7786"} Oct 08 14:07:12 crc kubenswrapper[5065]: I1008 14:07:12.154197 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nxrld" podStartSLOduration=2.5164619950000002 podStartE2EDuration="5.154172816s" podCreationTimestamp="2025-10-08 14:07:07 +0000 UTC" firstStartedPulling="2025-10-08 14:07:09.099544908 +0000 UTC m=+2930.876926695" lastFinishedPulling="2025-10-08 14:07:11.737255719 +0000 UTC m=+2933.514637516" observedRunningTime="2025-10-08 14:07:12.145776313 +0000 UTC m=+2933.923158110" watchObservedRunningTime="2025-10-08 14:07:12.154172816 +0000 UTC m=+2933.931554593" Oct 08 14:07:15 crc kubenswrapper[5065]: I1008 14:07:15.873929 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:07:15 crc kubenswrapper[5065]: E1008 14:07:15.874907 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:07:17 crc kubenswrapper[5065]: I1008 14:07:17.736876 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:17 crc kubenswrapper[5065]: I1008 14:07:17.736989 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:17 crc kubenswrapper[5065]: I1008 14:07:17.820018 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:18 crc kubenswrapper[5065]: I1008 14:07:18.278381 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:18 crc kubenswrapper[5065]: I1008 14:07:18.338078 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nxrld"] Oct 08 14:07:20 crc kubenswrapper[5065]: I1008 14:07:20.243508 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nxrld" podUID="782b815d-b73d-4153-a65c-f79a52afc00e" containerName="registry-server" containerID="cri-o://0a6e26287b340946aafc0a8fb477b4a783f4d485c053af7dc8bc4d576cfc7786" gracePeriod=2 Oct 08 14:07:20 crc kubenswrapper[5065]: I1008 14:07:20.662059 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:20 crc kubenswrapper[5065]: I1008 14:07:20.755807 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782b815d-b73d-4153-a65c-f79a52afc00e-catalog-content\") pod \"782b815d-b73d-4153-a65c-f79a52afc00e\" (UID: \"782b815d-b73d-4153-a65c-f79a52afc00e\") " Oct 08 14:07:20 crc kubenswrapper[5065]: I1008 14:07:20.755868 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782b815d-b73d-4153-a65c-f79a52afc00e-utilities\") pod \"782b815d-b73d-4153-a65c-f79a52afc00e\" (UID: \"782b815d-b73d-4153-a65c-f79a52afc00e\") " Oct 08 14:07:20 crc kubenswrapper[5065]: I1008 14:07:20.756004 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvlzh\" (UniqueName: \"kubernetes.io/projected/782b815d-b73d-4153-a65c-f79a52afc00e-kube-api-access-tvlzh\") pod \"782b815d-b73d-4153-a65c-f79a52afc00e\" (UID: \"782b815d-b73d-4153-a65c-f79a52afc00e\") " Oct 08 14:07:20 crc kubenswrapper[5065]: I1008 14:07:20.757766 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782b815d-b73d-4153-a65c-f79a52afc00e-utilities" (OuterVolumeSpecName: "utilities") pod "782b815d-b73d-4153-a65c-f79a52afc00e" (UID: "782b815d-b73d-4153-a65c-f79a52afc00e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:07:20 crc kubenswrapper[5065]: I1008 14:07:20.761887 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782b815d-b73d-4153-a65c-f79a52afc00e-kube-api-access-tvlzh" (OuterVolumeSpecName: "kube-api-access-tvlzh") pod "782b815d-b73d-4153-a65c-f79a52afc00e" (UID: "782b815d-b73d-4153-a65c-f79a52afc00e"). InnerVolumeSpecName "kube-api-access-tvlzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:07:20 crc kubenswrapper[5065]: I1008 14:07:20.769578 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782b815d-b73d-4153-a65c-f79a52afc00e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "782b815d-b73d-4153-a65c-f79a52afc00e" (UID: "782b815d-b73d-4153-a65c-f79a52afc00e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:07:20 crc kubenswrapper[5065]: I1008 14:07:20.857402 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782b815d-b73d-4153-a65c-f79a52afc00e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:07:20 crc kubenswrapper[5065]: I1008 14:07:20.857458 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782b815d-b73d-4153-a65c-f79a52afc00e-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:07:20 crc kubenswrapper[5065]: I1008 14:07:20.857472 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvlzh\" (UniqueName: \"kubernetes.io/projected/782b815d-b73d-4153-a65c-f79a52afc00e-kube-api-access-tvlzh\") on node \"crc\" DevicePath \"\"" Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.253205 5065 generic.go:334] "Generic (PLEG): container finished" podID="782b815d-b73d-4153-a65c-f79a52afc00e" containerID="0a6e26287b340946aafc0a8fb477b4a783f4d485c053af7dc8bc4d576cfc7786" exitCode=0 Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.253239 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nxrld" Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.253253 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nxrld" event={"ID":"782b815d-b73d-4153-a65c-f79a52afc00e","Type":"ContainerDied","Data":"0a6e26287b340946aafc0a8fb477b4a783f4d485c053af7dc8bc4d576cfc7786"} Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.255149 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nxrld" event={"ID":"782b815d-b73d-4153-a65c-f79a52afc00e","Type":"ContainerDied","Data":"a9ccbe2fb8d6ae2ab2c4fe86d6df031d96f96352945c2c330f78898a18fe30d7"} Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.255204 5065 scope.go:117] "RemoveContainer" containerID="0a6e26287b340946aafc0a8fb477b4a783f4d485c053af7dc8bc4d576cfc7786" Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.277674 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nxrld"] Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.284384 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nxrld"] Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.286920 5065 scope.go:117] "RemoveContainer" containerID="aae93cf5bbc548092ca9c41dd23a02437af3e163847fa64bfebe55003ee89c61" Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.310143 5065 scope.go:117] "RemoveContainer" containerID="6007fc2ea32d5b4dd1594a9ca78f16389fcf1a52f614c285964f5603ca38f336" Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.339810 5065 scope.go:117] "RemoveContainer" containerID="0a6e26287b340946aafc0a8fb477b4a783f4d485c053af7dc8bc4d576cfc7786" Oct 08 14:07:21 crc kubenswrapper[5065]: E1008 14:07:21.340334 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a6e26287b340946aafc0a8fb477b4a783f4d485c053af7dc8bc4d576cfc7786\": container with ID starting with 0a6e26287b340946aafc0a8fb477b4a783f4d485c053af7dc8bc4d576cfc7786 not found: ID does not exist" containerID="0a6e26287b340946aafc0a8fb477b4a783f4d485c053af7dc8bc4d576cfc7786" Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.340442 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a6e26287b340946aafc0a8fb477b4a783f4d485c053af7dc8bc4d576cfc7786"} err="failed to get container status \"0a6e26287b340946aafc0a8fb477b4a783f4d485c053af7dc8bc4d576cfc7786\": rpc error: code = NotFound desc = could not find container \"0a6e26287b340946aafc0a8fb477b4a783f4d485c053af7dc8bc4d576cfc7786\": container with ID starting with 0a6e26287b340946aafc0a8fb477b4a783f4d485c053af7dc8bc4d576cfc7786 not found: ID does not exist" Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.340525 5065 scope.go:117] "RemoveContainer" containerID="aae93cf5bbc548092ca9c41dd23a02437af3e163847fa64bfebe55003ee89c61" Oct 08 14:07:21 crc kubenswrapper[5065]: E1008 14:07:21.340886 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aae93cf5bbc548092ca9c41dd23a02437af3e163847fa64bfebe55003ee89c61\": container with ID starting with aae93cf5bbc548092ca9c41dd23a02437af3e163847fa64bfebe55003ee89c61 not found: ID does not exist" containerID="aae93cf5bbc548092ca9c41dd23a02437af3e163847fa64bfebe55003ee89c61" Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.340970 5065 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae93cf5bbc548092ca9c41dd23a02437af3e163847fa64bfebe55003ee89c61"} err="failed to get container status \"aae93cf5bbc548092ca9c41dd23a02437af3e163847fa64bfebe55003ee89c61\": rpc error: code = NotFound desc = could not find container \"aae93cf5bbc548092ca9c41dd23a02437af3e163847fa64bfebe55003ee89c61\": container with ID starting with aae93cf5bbc548092ca9c41dd23a02437af3e163847fa64bfebe55003ee89c61 not found: ID does not exist" Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.341043 5065 scope.go:117] "RemoveContainer" containerID="6007fc2ea32d5b4dd1594a9ca78f16389fcf1a52f614c285964f5603ca38f336" Oct 08 14:07:21 crc kubenswrapper[5065]: E1008 14:07:21.342000 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6007fc2ea32d5b4dd1594a9ca78f16389fcf1a52f614c285964f5603ca38f336\": container with ID starting with 6007fc2ea32d5b4dd1594a9ca78f16389fcf1a52f614c285964f5603ca38f336 not found: ID does not exist" containerID="6007fc2ea32d5b4dd1594a9ca78f16389fcf1a52f614c285964f5603ca38f336" Oct 08 14:07:21 crc kubenswrapper[5065]: I1008 14:07:21.342615 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6007fc2ea32d5b4dd1594a9ca78f16389fcf1a52f614c285964f5603ca38f336"} err="failed to get container status \"6007fc2ea32d5b4dd1594a9ca78f16389fcf1a52f614c285964f5603ca38f336\": rpc error: code = NotFound desc = could not find container \"6007fc2ea32d5b4dd1594a9ca78f16389fcf1a52f614c285964f5603ca38f336\": container with ID starting with 6007fc2ea32d5b4dd1594a9ca78f16389fcf1a52f614c285964f5603ca38f336 not found: ID does not exist" Oct 08 14:07:22 crc kubenswrapper[5065]: I1008 14:07:22.889989 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782b815d-b73d-4153-a65c-f79a52afc00e" path="/var/lib/kubelet/pods/782b815d-b73d-4153-a65c-f79a52afc00e/volumes" Oct 08 14:07:28 crc kubenswrapper[5065]: I1008 14:07:28.881986 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:07:28 crc kubenswrapper[5065]: E1008 14:07:28.883156 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:07:42 crc kubenswrapper[5065]: I1008 14:07:42.873560 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:07:42 crc kubenswrapper[5065]: E1008 14:07:42.875968 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.102399 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hzrfx"] Oct 08 14:07:52 crc 
kubenswrapper[5065]: E1008 14:07:52.103413 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782b815d-b73d-4153-a65c-f79a52afc00e" containerName="extract-utilities" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.103469 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="782b815d-b73d-4153-a65c-f79a52afc00e" containerName="extract-utilities" Oct 08 14:07:52 crc kubenswrapper[5065]: E1008 14:07:52.103508 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782b815d-b73d-4153-a65c-f79a52afc00e" containerName="registry-server" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.103523 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="782b815d-b73d-4153-a65c-f79a52afc00e" containerName="registry-server" Oct 08 14:07:52 crc kubenswrapper[5065]: E1008 14:07:52.103554 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782b815d-b73d-4153-a65c-f79a52afc00e" containerName="extract-content" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.103567 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="782b815d-b73d-4153-a65c-f79a52afc00e" containerName="extract-content" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.103880 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="782b815d-b73d-4153-a65c-f79a52afc00e" containerName="registry-server" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.105892 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.117376 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hzrfx"] Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.204548 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996097d7-4c9a-4798-8e9e-b3e26b7dffda-utilities\") pod \"redhat-operators-hzrfx\" (UID: \"996097d7-4c9a-4798-8e9e-b3e26b7dffda\") " pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.204631 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996097d7-4c9a-4798-8e9e-b3e26b7dffda-catalog-content\") pod \"redhat-operators-hzrfx\" (UID: \"996097d7-4c9a-4798-8e9e-b3e26b7dffda\") " pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.204664 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j94q6\" (UniqueName: \"kubernetes.io/projected/996097d7-4c9a-4798-8e9e-b3e26b7dffda-kube-api-access-j94q6\") pod \"redhat-operators-hzrfx\" (UID: \"996097d7-4c9a-4798-8e9e-b3e26b7dffda\") " pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.305844 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996097d7-4c9a-4798-8e9e-b3e26b7dffda-utilities\") pod \"redhat-operators-hzrfx\" (UID: \"996097d7-4c9a-4798-8e9e-b3e26b7dffda\") " pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.305902 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/996097d7-4c9a-4798-8e9e-b3e26b7dffda-catalog-content\") pod \"redhat-operators-hzrfx\" (UID: \"996097d7-4c9a-4798-8e9e-b3e26b7dffda\") " pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.305936 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j94q6\" (UniqueName: \"kubernetes.io/projected/996097d7-4c9a-4798-8e9e-b3e26b7dffda-kube-api-access-j94q6\") pod \"redhat-operators-hzrfx\" (UID: \"996097d7-4c9a-4798-8e9e-b3e26b7dffda\") " pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.306435 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996097d7-4c9a-4798-8e9e-b3e26b7dffda-catalog-content\") pod \"redhat-operators-hzrfx\" (UID: \"996097d7-4c9a-4798-8e9e-b3e26b7dffda\") " pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.306662 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996097d7-4c9a-4798-8e9e-b3e26b7dffda-utilities\") pod \"redhat-operators-hzrfx\" (UID: \"996097d7-4c9a-4798-8e9e-b3e26b7dffda\") " pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.324727 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j94q6\" (UniqueName: \"kubernetes.io/projected/996097d7-4c9a-4798-8e9e-b3e26b7dffda-kube-api-access-j94q6\") pod \"redhat-operators-hzrfx\" (UID: \"996097d7-4c9a-4798-8e9e-b3e26b7dffda\") " pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.462619 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:07:52 crc kubenswrapper[5065]: I1008 14:07:52.901538 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hzrfx"] Oct 08 14:07:53 crc kubenswrapper[5065]: I1008 14:07:53.577536 5065 generic.go:334] "Generic (PLEG): container finished" podID="996097d7-4c9a-4798-8e9e-b3e26b7dffda" containerID="d79386772e3df2251ff5b1e0b714183b6e3288196ee07e8751449cebac557959" exitCode=0 Oct 08 14:07:53 crc kubenswrapper[5065]: I1008 14:07:53.577605 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzrfx" event={"ID":"996097d7-4c9a-4798-8e9e-b3e26b7dffda","Type":"ContainerDied","Data":"d79386772e3df2251ff5b1e0b714183b6e3288196ee07e8751449cebac557959"} Oct 08 14:07:53 crc kubenswrapper[5065]: I1008 14:07:53.578052 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzrfx" event={"ID":"996097d7-4c9a-4798-8e9e-b3e26b7dffda","Type":"ContainerStarted","Data":"2f41bf871eddde3215d39d734e7c5dd6e68e0a43c36bd0b6659d2f42c2e1a84f"} Oct 08 14:07:56 crc kubenswrapper[5065]: I1008 14:07:56.874349 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:07:56 crc kubenswrapper[5065]: E1008 14:07:56.876047 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:08:01 crc kubenswrapper[5065]: I1008 14:08:01.655596 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzrfx" event={"ID":"996097d7-4c9a-4798-8e9e-b3e26b7dffda","Type":"ContainerStarted","Data":"bc73f0478618a8f2eeec0d4ebadc36eebca7633e016fd3dbc71edef17643a101"} Oct 08 14:08:02 crc kubenswrapper[5065]: I1008 14:08:02.667746 5065 generic.go:334] "Generic (PLEG): container finished" podID="996097d7-4c9a-4798-8e9e-b3e26b7dffda" containerID="bc73f0478618a8f2eeec0d4ebadc36eebca7633e016fd3dbc71edef17643a101" exitCode=0 Oct 08 14:08:02 crc kubenswrapper[5065]: I1008 14:08:02.667824 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzrfx" event={"ID":"996097d7-4c9a-4798-8e9e-b3e26b7dffda","Type":"ContainerDied","Data":"bc73f0478618a8f2eeec0d4ebadc36eebca7633e016fd3dbc71edef17643a101"} Oct 08 14:08:03 crc kubenswrapper[5065]: I1008 14:08:03.675585 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzrfx" event={"ID":"996097d7-4c9a-4798-8e9e-b3e26b7dffda","Type":"ContainerStarted","Data":"3324a8ae95761eb431b3b4edfd8c55c4bb821e3163767de9dd9d59e64097714a"} Oct 08 14:08:03 crc kubenswrapper[5065]: I1008 14:08:03.693993 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hzrfx" podStartSLOduration=2.09633064 podStartE2EDuration="11.693967332s" podCreationTimestamp="2025-10-08 14:07:52 +0000 UTC" firstStartedPulling="2025-10-08 14:07:53.580038101 +0000 UTC m=+2975.357419868" lastFinishedPulling="2025-10-08 14:08:03.177674773 +0000 UTC m=+2984.955056560" observedRunningTime="2025-10-08 14:08:03.691888784 +0000 
UTC m=+2985.469270561" watchObservedRunningTime="2025-10-08 14:08:03.693967332 +0000 UTC m=+2985.471349109" Oct 08 14:08:11 crc kubenswrapper[5065]: I1008 14:08:11.873750 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:08:11 crc kubenswrapper[5065]: E1008 14:08:11.874390 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:08:12 crc kubenswrapper[5065]: I1008 14:08:12.463479 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:08:12 crc kubenswrapper[5065]: I1008 14:08:12.463897 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:08:12 crc kubenswrapper[5065]: I1008 14:08:12.592699 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:08:12 crc kubenswrapper[5065]: I1008 14:08:12.814502 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hzrfx" Oct 08 14:08:12 crc kubenswrapper[5065]: I1008 14:08:12.895284 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hzrfx"] Oct 08 14:08:12 crc kubenswrapper[5065]: I1008 14:08:12.951340 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l9jt2"] Oct 08 14:08:12 crc kubenswrapper[5065]: I1008 14:08:12.951656 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l9jt2" podUID="b23003ae-9c21-40d9-ad7c-f92806581aa9" containerName="registry-server" containerID="cri-o://83d42e549bcc22c5de035294ae0b38f9adf8ce27003c7140c9af2beaf5578108" gracePeriod=2 Oct 08 14:08:14 crc kubenswrapper[5065]: I1008 14:08:14.771830 5065 generic.go:334] "Generic (PLEG): container finished" podID="b23003ae-9c21-40d9-ad7c-f92806581aa9" containerID="83d42e549bcc22c5de035294ae0b38f9adf8ce27003c7140c9af2beaf5578108" exitCode=0 Oct 08 14:08:14 crc kubenswrapper[5065]: I1008 14:08:14.771902 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9jt2" event={"ID":"b23003ae-9c21-40d9-ad7c-f92806581aa9","Type":"ContainerDied","Data":"83d42e549bcc22c5de035294ae0b38f9adf8ce27003c7140c9af2beaf5578108"} Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.024261 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.130960 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t85pn\" (UniqueName: \"kubernetes.io/projected/b23003ae-9c21-40d9-ad7c-f92806581aa9-kube-api-access-t85pn\") pod \"b23003ae-9c21-40d9-ad7c-f92806581aa9\" (UID: \"b23003ae-9c21-40d9-ad7c-f92806581aa9\") " Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.131014 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23003ae-9c21-40d9-ad7c-f92806581aa9-catalog-content\") pod \"b23003ae-9c21-40d9-ad7c-f92806581aa9\" (UID: \"b23003ae-9c21-40d9-ad7c-f92806581aa9\") " Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.131134 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23003ae-9c21-40d9-ad7c-f92806581aa9-utilities\") pod \"b23003ae-9c21-40d9-ad7c-f92806581aa9\" (UID: \"b23003ae-9c21-40d9-ad7c-f92806581aa9\") " Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.131885 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23003ae-9c21-40d9-ad7c-f92806581aa9-utilities" (OuterVolumeSpecName: "utilities") pod "b23003ae-9c21-40d9-ad7c-f92806581aa9" (UID: "b23003ae-9c21-40d9-ad7c-f92806581aa9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.141624 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23003ae-9c21-40d9-ad7c-f92806581aa9-kube-api-access-t85pn" (OuterVolumeSpecName: "kube-api-access-t85pn") pod "b23003ae-9c21-40d9-ad7c-f92806581aa9" (UID: "b23003ae-9c21-40d9-ad7c-f92806581aa9"). InnerVolumeSpecName "kube-api-access-t85pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.208139 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23003ae-9c21-40d9-ad7c-f92806581aa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b23003ae-9c21-40d9-ad7c-f92806581aa9" (UID: "b23003ae-9c21-40d9-ad7c-f92806581aa9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.232723 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23003ae-9c21-40d9-ad7c-f92806581aa9-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.233073 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t85pn\" (UniqueName: \"kubernetes.io/projected/b23003ae-9c21-40d9-ad7c-f92806581aa9-kube-api-access-t85pn\") on node \"crc\" DevicePath \"\"" Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.233157 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23003ae-9c21-40d9-ad7c-f92806581aa9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.780680 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9jt2" event={"ID":"b23003ae-9c21-40d9-ad7c-f92806581aa9","Type":"ContainerDied","Data":"cee70a45d5b267b2ec61953c49260194b0c004e5aacafe9a0f7fa3eaa2c3e216"} Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.780744 5065 scope.go:117] "RemoveContainer" containerID="83d42e549bcc22c5de035294ae0b38f9adf8ce27003c7140c9af2beaf5578108" Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.781368 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l9jt2" Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.799672 5065 scope.go:117] "RemoveContainer" containerID="120b8f30029515927a3f5a96dd83998330dab58ffd64d5cfd4f3a0f89dbe85d8" Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.819778 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l9jt2"] Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.822724 5065 scope.go:117] "RemoveContainer" containerID="09ca15ebba4d88814d9dbb429aa01c338f614fc506ea26cab790807dd5268131" Oct 08 14:08:15 crc kubenswrapper[5065]: I1008 14:08:15.832157 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l9jt2"] Oct 08 14:08:16 crc kubenswrapper[5065]: I1008 14:08:16.887271 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23003ae-9c21-40d9-ad7c-f92806581aa9" path="/var/lib/kubelet/pods/b23003ae-9c21-40d9-ad7c-f92806581aa9/volumes" Oct 08 14:08:24 crc kubenswrapper[5065]: I1008 14:08:24.873565 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:08:24 crc kubenswrapper[5065]: E1008 14:08:24.874285 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:08:37 crc kubenswrapper[5065]: I1008 14:08:37.874365 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:08:37 crc kubenswrapper[5065]: E1008 14:08:37.875299 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:08:52 crc kubenswrapper[5065]: I1008 14:08:52.874259 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:08:52 crc kubenswrapper[5065]: E1008 14:08:52.875335 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:09:05 crc kubenswrapper[5065]: I1008 14:09:05.873914 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:09:05 crc kubenswrapper[5065]: E1008 14:09:05.874797 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:09:16 crc kubenswrapper[5065]: I1008 14:09:16.874567 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:09:16 crc kubenswrapper[5065]: E1008 14:09:16.875373 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:09:29 crc kubenswrapper[5065]: I1008 14:09:29.873220 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:09:29 crc kubenswrapper[5065]: E1008 14:09:29.875005 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:09:42 crc kubenswrapper[5065]: I1008 14:09:42.876695 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:09:42 crc kubenswrapper[5065]: E1008 14:09:42.877814 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:09:55 crc kubenswrapper[5065]: I1008 14:09:55.873669 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:09:55 crc kubenswrapper[5065]: E1008 14:09:55.874404 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:10:06 crc kubenswrapper[5065]: I1008 14:10:06.873968 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:10:06 crc kubenswrapper[5065]: E1008 14:10:06.875706 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:10:21 crc kubenswrapper[5065]: I1008 14:10:21.874313 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:10:21 crc kubenswrapper[5065]: E1008 14:10:21.876003 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:10:33 crc kubenswrapper[5065]: I1008 14:10:33.874099 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:10:33 crc kubenswrapper[5065]: E1008 14:10:33.874760 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:10:48 crc kubenswrapper[5065]: I1008 14:10:48.873842 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:10:48 crc kubenswrapper[5065]: E1008 14:10:48.874684 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:11:02 crc kubenswrapper[5065]: I1008 14:11:02.874321 5065 
scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:11:02 crc kubenswrapper[5065]: E1008 14:11:02.875529 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:11:17 crc kubenswrapper[5065]: I1008 14:11:17.874170 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:11:17 crc kubenswrapper[5065]: E1008 14:11:17.875635 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.123632 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-grm5v"] Oct 08 14:11:25 crc kubenswrapper[5065]: E1008 14:11:25.124438 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23003ae-9c21-40d9-ad7c-f92806581aa9" containerName="registry-server" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.124449 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23003ae-9c21-40d9-ad7c-f92806581aa9" containerName="registry-server" Oct 08 14:11:25 crc kubenswrapper[5065]: E1008 14:11:25.124468 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23003ae-9c21-40d9-ad7c-f92806581aa9" containerName="extract-utilities" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.124474 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23003ae-9c21-40d9-ad7c-f92806581aa9" containerName="extract-utilities" Oct 08 14:11:25 crc kubenswrapper[5065]: E1008 14:11:25.124497 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23003ae-9c21-40d9-ad7c-f92806581aa9" containerName="extract-content" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.124503 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23003ae-9c21-40d9-ad7c-f92806581aa9" containerName="extract-content" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.124638 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23003ae-9c21-40d9-ad7c-f92806581aa9" containerName="registry-server" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.125615 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.132415 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-grm5v"] Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.144313 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v8pc\" (UniqueName: \"kubernetes.io/projected/7b787b72-84d0-4b93-a850-4743fc059946-kube-api-access-6v8pc\") pod \"community-operators-grm5v\" (UID: \"7b787b72-84d0-4b93-a850-4743fc059946\") " pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.144396 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b787b72-84d0-4b93-a850-4743fc059946-utilities\") pod \"community-operators-grm5v\" (UID: \"7b787b72-84d0-4b93-a850-4743fc059946\") " pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.144509 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b787b72-84d0-4b93-a850-4743fc059946-catalog-content\") pod \"community-operators-grm5v\" (UID: \"7b787b72-84d0-4b93-a850-4743fc059946\") " pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.244927 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b787b72-84d0-4b93-a850-4743fc059946-catalog-content\") pod \"community-operators-grm5v\" (UID: \"7b787b72-84d0-4b93-a850-4743fc059946\") " pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.244983 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v8pc\" (UniqueName: \"kubernetes.io/projected/7b787b72-84d0-4b93-a850-4743fc059946-kube-api-access-6v8pc\") pod \"community-operators-grm5v\" (UID: \"7b787b72-84d0-4b93-a850-4743fc059946\") " pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.245027 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b787b72-84d0-4b93-a850-4743fc059946-utilities\") pod \"community-operators-grm5v\" (UID: \"7b787b72-84d0-4b93-a850-4743fc059946\") " pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.245362 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b787b72-84d0-4b93-a850-4743fc059946-catalog-content\") pod \"community-operators-grm5v\" (UID: \"7b787b72-84d0-4b93-a850-4743fc059946\") " pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.245387 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b787b72-84d0-4b93-a850-4743fc059946-utilities\") pod \"community-operators-grm5v\" (UID: \"7b787b72-84d0-4b93-a850-4743fc059946\") " pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.272896 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6v8pc\" (UniqueName: \"kubernetes.io/projected/7b787b72-84d0-4b93-a850-4743fc059946-kube-api-access-6v8pc\") pod \"community-operators-grm5v\" (UID: \"7b787b72-84d0-4b93-a850-4743fc059946\") " pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.447467 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:25 crc kubenswrapper[5065]: I1008 14:11:25.735867 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-grm5v"] Oct 08 14:11:26 crc kubenswrapper[5065]: I1008 14:11:26.572231 5065 generic.go:334] "Generic (PLEG): container finished" podID="7b787b72-84d0-4b93-a850-4743fc059946" containerID="e7daf8487962270ffdfa3f9fa77022349eb30d774c6c6325f73538575628d6f9" exitCode=0 Oct 08 14:11:26 crc kubenswrapper[5065]: I1008 14:11:26.572309 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grm5v" event={"ID":"7b787b72-84d0-4b93-a850-4743fc059946","Type":"ContainerDied","Data":"e7daf8487962270ffdfa3f9fa77022349eb30d774c6c6325f73538575628d6f9"} Oct 08 14:11:26 crc kubenswrapper[5065]: I1008 14:11:26.574039 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grm5v" event={"ID":"7b787b72-84d0-4b93-a850-4743fc059946","Type":"ContainerStarted","Data":"e48de09bc81a7e138bd7cdb99dc6df875bd64bdbcae76974eaf83d000c5fd1af"} Oct 08 14:11:26 crc kubenswrapper[5065]: I1008 14:11:26.574344 5065 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 14:11:28 crc kubenswrapper[5065]: I1008 14:11:28.609928 5065 generic.go:334] "Generic (PLEG): container finished" podID="7b787b72-84d0-4b93-a850-4743fc059946" containerID="25fd71f737d1ea4d02906c08629c265615a8f69bf1aff01f22422e588b51afcd" exitCode=0 Oct 08 14:11:28 crc kubenswrapper[5065]: I1008 14:11:28.610059 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grm5v" event={"ID":"7b787b72-84d0-4b93-a850-4743fc059946","Type":"ContainerDied","Data":"25fd71f737d1ea4d02906c08629c265615a8f69bf1aff01f22422e588b51afcd"} Oct 08 14:11:28 crc kubenswrapper[5065]: I1008 14:11:28.878473 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:11:29 crc kubenswrapper[5065]: I1008 14:11:29.625226 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grm5v" event={"ID":"7b787b72-84d0-4b93-a850-4743fc059946","Type":"ContainerStarted","Data":"21fb49cc9fbeaf45e79164b00bf728ddf24ec5529642b8aec8b01a4dacedf020"} Oct 08 14:11:29 crc kubenswrapper[5065]: I1008 14:11:29.627992 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"f3cffa30df2c92acbf07fe560d24ae221a3a9c83ee70e9cca02d3084d8c16aab"} Oct 08 14:11:29 crc kubenswrapper[5065]: I1008 14:11:29.649882 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-grm5v" podStartSLOduration=1.94934856 podStartE2EDuration="4.649863652s" podCreationTimestamp="2025-10-08 14:11:25 +0000 UTC" firstStartedPulling="2025-10-08 14:11:26.573913138 +0000 UTC m=+3188.351294935" 
lastFinishedPulling="2025-10-08 14:11:29.27442828 +0000 UTC m=+3191.051810027" observedRunningTime="2025-10-08 14:11:29.643742574 +0000 UTC m=+3191.421124381" watchObservedRunningTime="2025-10-08 14:11:29.649863652 +0000 UTC m=+3191.427245429" Oct 08 14:11:35 crc kubenswrapper[5065]: I1008 14:11:35.447978 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:35 crc kubenswrapper[5065]: I1008 14:11:35.449552 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:35 crc kubenswrapper[5065]: I1008 14:11:35.512847 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:35 crc kubenswrapper[5065]: I1008 14:11:35.739905 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:39 crc kubenswrapper[5065]: I1008 14:11:39.904935 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-grm5v"] Oct 08 14:11:39 crc kubenswrapper[5065]: I1008 14:11:39.905611 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-grm5v" podUID="7b787b72-84d0-4b93-a850-4743fc059946" containerName="registry-server" containerID="cri-o://21fb49cc9fbeaf45e79164b00bf728ddf24ec5529642b8aec8b01a4dacedf020" gracePeriod=2 Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.329109 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.398130 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b787b72-84d0-4b93-a850-4743fc059946-utilities\") pod \"7b787b72-84d0-4b93-a850-4743fc059946\" (UID: \"7b787b72-84d0-4b93-a850-4743fc059946\") " Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.398194 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v8pc\" (UniqueName: \"kubernetes.io/projected/7b787b72-84d0-4b93-a850-4743fc059946-kube-api-access-6v8pc\") pod \"7b787b72-84d0-4b93-a850-4743fc059946\" (UID: \"7b787b72-84d0-4b93-a850-4743fc059946\") " Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.398264 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b787b72-84d0-4b93-a850-4743fc059946-catalog-content\") pod \"7b787b72-84d0-4b93-a850-4743fc059946\" (UID: \"7b787b72-84d0-4b93-a850-4743fc059946\") " Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.400290 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b787b72-84d0-4b93-a850-4743fc059946-utilities" (OuterVolumeSpecName: "utilities") pod "7b787b72-84d0-4b93-a850-4743fc059946" (UID: "7b787b72-84d0-4b93-a850-4743fc059946"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.404939 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b787b72-84d0-4b93-a850-4743fc059946-kube-api-access-6v8pc" (OuterVolumeSpecName: "kube-api-access-6v8pc") pod "7b787b72-84d0-4b93-a850-4743fc059946" (UID: "7b787b72-84d0-4b93-a850-4743fc059946"). InnerVolumeSpecName "kube-api-access-6v8pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.473439 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b787b72-84d0-4b93-a850-4743fc059946-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b787b72-84d0-4b93-a850-4743fc059946" (UID: "7b787b72-84d0-4b93-a850-4743fc059946"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.500004 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b787b72-84d0-4b93-a850-4743fc059946-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.500054 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v8pc\" (UniqueName: \"kubernetes.io/projected/7b787b72-84d0-4b93-a850-4743fc059946-kube-api-access-6v8pc\") on node \"crc\" DevicePath \"\"" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.500064 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b787b72-84d0-4b93-a850-4743fc059946-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.726395 5065 generic.go:334] "Generic (PLEG): container finished" podID="7b787b72-84d0-4b93-a850-4743fc059946" containerID="21fb49cc9fbeaf45e79164b00bf728ddf24ec5529642b8aec8b01a4dacedf020" exitCode=0 Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.726479 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grm5v" event={"ID":"7b787b72-84d0-4b93-a850-4743fc059946","Type":"ContainerDied","Data":"21fb49cc9fbeaf45e79164b00bf728ddf24ec5529642b8aec8b01a4dacedf020"} Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.726525 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grm5v" event={"ID":"7b787b72-84d0-4b93-a850-4743fc059946","Type":"ContainerDied","Data":"e48de09bc81a7e138bd7cdb99dc6df875bd64bdbcae76974eaf83d000c5fd1af"} Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.726546 5065 scope.go:117] "RemoveContainer" containerID="21fb49cc9fbeaf45e79164b00bf728ddf24ec5529642b8aec8b01a4dacedf020" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.726546 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-grm5v" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.755555 5065 scope.go:117] "RemoveContainer" containerID="25fd71f737d1ea4d02906c08629c265615a8f69bf1aff01f22422e588b51afcd" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.790708 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-grm5v"] Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.797178 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-grm5v"] Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.805587 5065 scope.go:117] "RemoveContainer" containerID="e7daf8487962270ffdfa3f9fa77022349eb30d774c6c6325f73538575628d6f9" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.839712 5065 scope.go:117] "RemoveContainer" containerID="21fb49cc9fbeaf45e79164b00bf728ddf24ec5529642b8aec8b01a4dacedf020" Oct 08 14:11:40 crc kubenswrapper[5065]: E1008 14:11:40.844609 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21fb49cc9fbeaf45e79164b00bf728ddf24ec5529642b8aec8b01a4dacedf020\": container with ID starting with 21fb49cc9fbeaf45e79164b00bf728ddf24ec5529642b8aec8b01a4dacedf020 not found: ID does not exist" containerID="21fb49cc9fbeaf45e79164b00bf728ddf24ec5529642b8aec8b01a4dacedf020" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.844643 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21fb49cc9fbeaf45e79164b00bf728ddf24ec5529642b8aec8b01a4dacedf020"} err="failed to get container status \"21fb49cc9fbeaf45e79164b00bf728ddf24ec5529642b8aec8b01a4dacedf020\": rpc error: code = NotFound desc = could not find container \"21fb49cc9fbeaf45e79164b00bf728ddf24ec5529642b8aec8b01a4dacedf020\": container with ID starting with 21fb49cc9fbeaf45e79164b00bf728ddf24ec5529642b8aec8b01a4dacedf020 not found: ID does not exist" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.844667 5065 scope.go:117] "RemoveContainer" containerID="25fd71f737d1ea4d02906c08629c265615a8f69bf1aff01f22422e588b51afcd" Oct 08 14:11:40 crc kubenswrapper[5065]: E1008 14:11:40.845895 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25fd71f737d1ea4d02906c08629c265615a8f69bf1aff01f22422e588b51afcd\": container with ID starting with 25fd71f737d1ea4d02906c08629c265615a8f69bf1aff01f22422e588b51afcd not found: ID does not exist" containerID="25fd71f737d1ea4d02906c08629c265615a8f69bf1aff01f22422e588b51afcd" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.845918 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25fd71f737d1ea4d02906c08629c265615a8f69bf1aff01f22422e588b51afcd"} err="failed to get container status \"25fd71f737d1ea4d02906c08629c265615a8f69bf1aff01f22422e588b51afcd\": rpc error: code = NotFound desc = could not find container \"25fd71f737d1ea4d02906c08629c265615a8f69bf1aff01f22422e588b51afcd\": container with ID starting with 25fd71f737d1ea4d02906c08629c265615a8f69bf1aff01f22422e588b51afcd not found: ID does not exist" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.845933 5065 scope.go:117] "RemoveContainer" containerID="e7daf8487962270ffdfa3f9fa77022349eb30d774c6c6325f73538575628d6f9" Oct 08 14:11:40 crc kubenswrapper[5065]: E1008 14:11:40.846461 5065 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e7daf8487962270ffdfa3f9fa77022349eb30d774c6c6325f73538575628d6f9\": container with ID starting with e7daf8487962270ffdfa3f9fa77022349eb30d774c6c6325f73538575628d6f9 not found: ID does not exist" containerID="e7daf8487962270ffdfa3f9fa77022349eb30d774c6c6325f73538575628d6f9" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.846484 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7daf8487962270ffdfa3f9fa77022349eb30d774c6c6325f73538575628d6f9"} err="failed to get container status \"e7daf8487962270ffdfa3f9fa77022349eb30d774c6c6325f73538575628d6f9\": rpc error: code = NotFound desc = could not find container \"e7daf8487962270ffdfa3f9fa77022349eb30d774c6c6325f73538575628d6f9\": container with ID starting with e7daf8487962270ffdfa3f9fa77022349eb30d774c6c6325f73538575628d6f9 not found: ID does not exist" Oct 08 14:11:40 crc kubenswrapper[5065]: I1008 14:11:40.888641 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b787b72-84d0-4b93-a850-4743fc059946" path="/var/lib/kubelet/pods/7b787b72-84d0-4b93-a850-4743fc059946/volumes" Oct 08 14:12:04 crc kubenswrapper[5065]: E1008 14:12:04.221482 5065 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.349s" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.009915 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8pmf2"] Oct 08 14:13:54 crc kubenswrapper[5065]: E1008 14:13:54.012601 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b787b72-84d0-4b93-a850-4743fc059946" containerName="registry-server" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.012780 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b787b72-84d0-4b93-a850-4743fc059946" containerName="registry-server" Oct 08 14:13:54 crc kubenswrapper[5065]: E1008 14:13:54.012958 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b787b72-84d0-4b93-a850-4743fc059946" containerName="extract-content" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.013086 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b787b72-84d0-4b93-a850-4743fc059946" containerName="extract-content" Oct 08 14:13:54 crc kubenswrapper[5065]: E1008 14:13:54.013228 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b787b72-84d0-4b93-a850-4743fc059946" containerName="extract-utilities" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.013347 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b787b72-84d0-4b93-a850-4743fc059946" containerName="extract-utilities" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.013762 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b787b72-84d0-4b93-a850-4743fc059946" containerName="registry-server" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.015696 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.036392 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8pmf2"] Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.097369 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea22d7a-2cd3-44aa-bfce-641dc2641819-catalog-content\") pod \"certified-operators-8pmf2\" (UID: \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\") " pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.097443 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea22d7a-2cd3-44aa-bfce-641dc2641819-utilities\") pod \"certified-operators-8pmf2\" (UID: \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\") " pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.097493 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7llmr\" (UniqueName: \"kubernetes.io/projected/9ea22d7a-2cd3-44aa-bfce-641dc2641819-kube-api-access-7llmr\") pod \"certified-operators-8pmf2\" (UID: \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\") " pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.202077 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea22d7a-2cd3-44aa-bfce-641dc2641819-catalog-content\") pod \"certified-operators-8pmf2\" (UID: \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\") " pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.202155 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea22d7a-2cd3-44aa-bfce-641dc2641819-utilities\") pod \"certified-operators-8pmf2\" (UID: \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\") " pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.202196 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7llmr\" (UniqueName: \"kubernetes.io/projected/9ea22d7a-2cd3-44aa-bfce-641dc2641819-kube-api-access-7llmr\") pod \"certified-operators-8pmf2\" (UID: \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\") " pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.203144 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea22d7a-2cd3-44aa-bfce-641dc2641819-catalog-content\") pod \"certified-operators-8pmf2\" (UID: \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\") " pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.203161 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea22d7a-2cd3-44aa-bfce-641dc2641819-utilities\") pod \"certified-operators-8pmf2\" (UID: \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\") " pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.228822 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7llmr\" (UniqueName: \"kubernetes.io/projected/9ea22d7a-2cd3-44aa-bfce-641dc2641819-kube-api-access-7llmr\") pod \"certified-operators-8pmf2\" (UID: \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\") " pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.341706 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.375398 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.375473 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.609878 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8pmf2"] Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.872119 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pmf2" event={"ID":"9ea22d7a-2cd3-44aa-bfce-641dc2641819","Type":"ContainerStarted","Data":"0e95dd93e25c0e0778ee7a5a2a92f79af4fad2f8b10de3951a9766e63b14d312"} Oct 08 14:13:54 crc kubenswrapper[5065]: I1008 14:13:54.872169 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pmf2" event={"ID":"9ea22d7a-2cd3-44aa-bfce-641dc2641819","Type":"ContainerStarted","Data":"3ae134dea3426104590101fb8988ef80bbe7766d32b09f2e6f4ce21b42cbfead"} Oct 08 14:13:55 crc kubenswrapper[5065]: I1008 14:13:55.881293 5065 generic.go:334] "Generic (PLEG): container finished" podID="9ea22d7a-2cd3-44aa-bfce-641dc2641819" containerID="0e95dd93e25c0e0778ee7a5a2a92f79af4fad2f8b10de3951a9766e63b14d312" exitCode=0 Oct 08 14:13:55 crc kubenswrapper[5065]: I1008 14:13:55.881399 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pmf2" event={"ID":"9ea22d7a-2cd3-44aa-bfce-641dc2641819","Type":"ContainerDied","Data":"0e95dd93e25c0e0778ee7a5a2a92f79af4fad2f8b10de3951a9766e63b14d312"} Oct 08 14:13:57 crc kubenswrapper[5065]: I1008 14:13:57.902060 5065 generic.go:334] "Generic (PLEG): container finished" podID="9ea22d7a-2cd3-44aa-bfce-641dc2641819" containerID="fedbd3a91031d9833f274e8ffe6784ae31e2d43f03c976e4912c0c83ead2ad38" exitCode=0 Oct 08 14:13:57 crc kubenswrapper[5065]: I1008 14:13:57.902181 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pmf2" event={"ID":"9ea22d7a-2cd3-44aa-bfce-641dc2641819","Type":"ContainerDied","Data":"fedbd3a91031d9833f274e8ffe6784ae31e2d43f03c976e4912c0c83ead2ad38"} Oct 08 14:13:59 crc kubenswrapper[5065]: I1008 14:13:59.934334 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pmf2" event={"ID":"9ea22d7a-2cd3-44aa-bfce-641dc2641819","Type":"ContainerStarted","Data":"ba01da67a6a4d7bf6242461917a8b1f59dc349fdb1ccd723419b346af4d9eb09"} Oct 08 
14:13:59 crc kubenswrapper[5065]: I1008 14:13:59.958629 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8pmf2" podStartSLOduration=3.922331484 podStartE2EDuration="6.958604001s" podCreationTimestamp="2025-10-08 14:13:53 +0000 UTC" firstStartedPulling="2025-10-08 14:13:55.883311271 +0000 UTC m=+3337.660693048" lastFinishedPulling="2025-10-08 14:13:58.919583788 +0000 UTC m=+3340.696965565" observedRunningTime="2025-10-08 14:13:59.957826801 +0000 UTC m=+3341.735208558" watchObservedRunningTime="2025-10-08 14:13:59.958604001 +0000 UTC m=+3341.735985778" Oct 08 14:14:04 crc kubenswrapper[5065]: I1008 14:14:04.342351 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:14:04 crc kubenswrapper[5065]: I1008 14:14:04.342888 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:14:04 crc kubenswrapper[5065]: I1008 14:14:04.386439 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:14:05 crc kubenswrapper[5065]: I1008 14:14:05.055726 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:14:05 crc kubenswrapper[5065]: I1008 14:14:05.118940 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8pmf2"] Oct 08 14:14:06 crc kubenswrapper[5065]: I1008 14:14:06.991800 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8pmf2" podUID="9ea22d7a-2cd3-44aa-bfce-641dc2641819" containerName="registry-server" containerID="cri-o://ba01da67a6a4d7bf6242461917a8b1f59dc349fdb1ccd723419b346af4d9eb09" gracePeriod=2 Oct 08 14:14:08 crc kubenswrapper[5065]: I1008 14:14:08.012324 5065 generic.go:334] "Generic (PLEG): container finished" podID="9ea22d7a-2cd3-44aa-bfce-641dc2641819" containerID="ba01da67a6a4d7bf6242461917a8b1f59dc349fdb1ccd723419b346af4d9eb09" exitCode=0 Oct 08 14:14:08 crc kubenswrapper[5065]: I1008 14:14:08.012376 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pmf2" event={"ID":"9ea22d7a-2cd3-44aa-bfce-641dc2641819","Type":"ContainerDied","Data":"ba01da67a6a4d7bf6242461917a8b1f59dc349fdb1ccd723419b346af4d9eb09"} Oct 08 14:14:08 crc kubenswrapper[5065]: I1008 14:14:08.200397 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:14:08 crc kubenswrapper[5065]: I1008 14:14:08.338875 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea22d7a-2cd3-44aa-bfce-641dc2641819-utilities\") pod \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\" (UID: \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\") " Oct 08 14:14:08 crc kubenswrapper[5065]: I1008 14:14:08.339087 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea22d7a-2cd3-44aa-bfce-641dc2641819-catalog-content\") pod \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\" (UID: \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\") " Oct 08 14:14:08 crc kubenswrapper[5065]: I1008 14:14:08.339112 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7llmr\" (UniqueName: \"kubernetes.io/projected/9ea22d7a-2cd3-44aa-bfce-641dc2641819-kube-api-access-7llmr\") pod \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\" (UID: \"9ea22d7a-2cd3-44aa-bfce-641dc2641819\") " Oct 08 14:14:08 crc kubenswrapper[5065]: I1008 14:14:08.341962 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ea22d7a-2cd3-44aa-bfce-641dc2641819-utilities" (OuterVolumeSpecName: "utilities") pod "9ea22d7a-2cd3-44aa-bfce-641dc2641819" (UID: "9ea22d7a-2cd3-44aa-bfce-641dc2641819"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:14:08 crc kubenswrapper[5065]: I1008 14:14:08.345623 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ea22d7a-2cd3-44aa-bfce-641dc2641819-kube-api-access-7llmr" (OuterVolumeSpecName: "kube-api-access-7llmr") pod "9ea22d7a-2cd3-44aa-bfce-641dc2641819" (UID: "9ea22d7a-2cd3-44aa-bfce-641dc2641819"). InnerVolumeSpecName "kube-api-access-7llmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:14:08 crc kubenswrapper[5065]: I1008 14:14:08.441047 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7llmr\" (UniqueName: \"kubernetes.io/projected/9ea22d7a-2cd3-44aa-bfce-641dc2641819-kube-api-access-7llmr\") on node \"crc\" DevicePath \"\"" Oct 08 14:14:08 crc kubenswrapper[5065]: I1008 14:14:08.442080 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea22d7a-2cd3-44aa-bfce-641dc2641819-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:14:08 crc kubenswrapper[5065]: I1008 14:14:08.822962 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ea22d7a-2cd3-44aa-bfce-641dc2641819-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ea22d7a-2cd3-44aa-bfce-641dc2641819" (UID: "9ea22d7a-2cd3-44aa-bfce-641dc2641819"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:14:08 crc kubenswrapper[5065]: I1008 14:14:08.850089 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea22d7a-2cd3-44aa-bfce-641dc2641819-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:14:09 crc kubenswrapper[5065]: I1008 14:14:09.026458 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pmf2" event={"ID":"9ea22d7a-2cd3-44aa-bfce-641dc2641819","Type":"ContainerDied","Data":"3ae134dea3426104590101fb8988ef80bbe7766d32b09f2e6f4ce21b42cbfead"} Oct 08 14:14:09 crc kubenswrapper[5065]: I1008 14:14:09.026506 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8pmf2" Oct 08 14:14:09 crc kubenswrapper[5065]: I1008 14:14:09.026526 5065 scope.go:117] "RemoveContainer" containerID="ba01da67a6a4d7bf6242461917a8b1f59dc349fdb1ccd723419b346af4d9eb09" Oct 08 14:14:09 crc kubenswrapper[5065]: I1008 14:14:09.048553 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8pmf2"] Oct 08 14:14:09 crc kubenswrapper[5065]: I1008 14:14:09.052741 5065 scope.go:117] "RemoveContainer" containerID="fedbd3a91031d9833f274e8ffe6784ae31e2d43f03c976e4912c0c83ead2ad38" Oct 08 14:14:09 crc kubenswrapper[5065]: I1008 14:14:09.054777 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8pmf2"] Oct 08 14:14:09 crc kubenswrapper[5065]: I1008 14:14:09.076820 5065 scope.go:117] "RemoveContainer" containerID="0e95dd93e25c0e0778ee7a5a2a92f79af4fad2f8b10de3951a9766e63b14d312" Oct 08 14:14:10 crc kubenswrapper[5065]: I1008 14:14:10.891224 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ea22d7a-2cd3-44aa-bfce-641dc2641819" path="/var/lib/kubelet/pods/9ea22d7a-2cd3-44aa-bfce-641dc2641819/volumes" Oct 08 14:14:24 crc kubenswrapper[5065]: I1008 14:14:24.375562 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:14:24 crc kubenswrapper[5065]: I1008 14:14:24.376184 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:14:54 crc kubenswrapper[5065]: I1008 14:14:54.375207 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:14:54 crc kubenswrapper[5065]: I1008 14:14:54.376150 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:14:54 crc kubenswrapper[5065]: I1008 14:14:54.376210 5065 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 14:14:54 crc kubenswrapper[5065]: I1008 14:14:54.376862 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3cffa30df2c92acbf07fe560d24ae221a3a9c83ee70e9cca02d3084d8c16aab"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:14:54 crc kubenswrapper[5065]: I1008 14:14:54.376922 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://f3cffa30df2c92acbf07fe560d24ae221a3a9c83ee70e9cca02d3084d8c16aab" gracePeriod=600 Oct 08 14:14:55 crc kubenswrapper[5065]: I1008 14:14:55.454087 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="f3cffa30df2c92acbf07fe560d24ae221a3a9c83ee70e9cca02d3084d8c16aab" exitCode=0 Oct 08 14:14:55 crc kubenswrapper[5065]: I1008 14:14:55.454196 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"f3cffa30df2c92acbf07fe560d24ae221a3a9c83ee70e9cca02d3084d8c16aab"} Oct 08 14:14:55 crc kubenswrapper[5065]: I1008 14:14:55.454508 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d"} Oct 08 14:14:55 crc kubenswrapper[5065]: I1008 14:14:55.454539 5065 scope.go:117] "RemoveContainer" containerID="325bdfcb82dd8035bf6b2d675aaa6470e4509d8219fd60472e30e925e1950f7d" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.198867 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc"] Oct 08 14:15:00 crc kubenswrapper[5065]: E1008 14:15:00.199529 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea22d7a-2cd3-44aa-bfce-641dc2641819" containerName="extract-content" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.199542 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea22d7a-2cd3-44aa-bfce-641dc2641819" containerName="extract-content" Oct 08 14:15:00 crc kubenswrapper[5065]: E1008 14:15:00.199567 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea22d7a-2cd3-44aa-bfce-641dc2641819" containerName="extract-utilities" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.199573 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea22d7a-2cd3-44aa-bfce-641dc2641819" containerName="extract-utilities" Oct 08 14:15:00 crc kubenswrapper[5065]: E1008 14:15:00.199593 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea22d7a-2cd3-44aa-bfce-641dc2641819" containerName="registry-server" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.199600 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea22d7a-2cd3-44aa-bfce-641dc2641819" containerName="registry-server" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.199749 5065 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9ea22d7a-2cd3-44aa-bfce-641dc2641819" containerName="registry-server" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.200229 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.202915 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.203114 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.215387 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc"] Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.360785 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xk57\" (UniqueName: \"kubernetes.io/projected/e6164116-5040-4443-85a0-390078514904-kube-api-access-6xk57\") pod \"collect-profiles-29332215-nrhdc\" (UID: \"e6164116-5040-4443-85a0-390078514904\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.361138 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6164116-5040-4443-85a0-390078514904-secret-volume\") pod \"collect-profiles-29332215-nrhdc\" (UID: \"e6164116-5040-4443-85a0-390078514904\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.361177 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6164116-5040-4443-85a0-390078514904-config-volume\") pod \"collect-profiles-29332215-nrhdc\" (UID: \"e6164116-5040-4443-85a0-390078514904\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.462168 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xk57\" (UniqueName: \"kubernetes.io/projected/e6164116-5040-4443-85a0-390078514904-kube-api-access-6xk57\") pod \"collect-profiles-29332215-nrhdc\" (UID: \"e6164116-5040-4443-85a0-390078514904\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.462255 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6164116-5040-4443-85a0-390078514904-secret-volume\") pod \"collect-profiles-29332215-nrhdc\" (UID: \"e6164116-5040-4443-85a0-390078514904\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.462306 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6164116-5040-4443-85a0-390078514904-config-volume\") pod \"collect-profiles-29332215-nrhdc\" (UID: \"e6164116-5040-4443-85a0-390078514904\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" Oct 08 14:15:00 crc 
kubenswrapper[5065]: I1008 14:15:00.463465 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6164116-5040-4443-85a0-390078514904-config-volume\") pod \"collect-profiles-29332215-nrhdc\" (UID: \"e6164116-5040-4443-85a0-390078514904\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.468665 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6164116-5040-4443-85a0-390078514904-secret-volume\") pod \"collect-profiles-29332215-nrhdc\" (UID: \"e6164116-5040-4443-85a0-390078514904\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.482503 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xk57\" (UniqueName: \"kubernetes.io/projected/e6164116-5040-4443-85a0-390078514904-kube-api-access-6xk57\") pod \"collect-profiles-29332215-nrhdc\" (UID: \"e6164116-5040-4443-85a0-390078514904\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.528176 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" Oct 08 14:15:00 crc kubenswrapper[5065]: I1008 14:15:00.924993 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc"] Oct 08 14:15:01 crc kubenswrapper[5065]: I1008 14:15:01.515282 5065 generic.go:334] "Generic (PLEG): container finished" podID="e6164116-5040-4443-85a0-390078514904" containerID="d96258dbc937e8f5ed56858f1e32cd96bc6bf24ef86f51a0a044493a9a70207b" exitCode=0 Oct 08 14:15:01 crc kubenswrapper[5065]: I1008 14:15:01.515336 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" event={"ID":"e6164116-5040-4443-85a0-390078514904","Type":"ContainerDied","Data":"d96258dbc937e8f5ed56858f1e32cd96bc6bf24ef86f51a0a044493a9a70207b"} Oct 08 14:15:01 crc kubenswrapper[5065]: I1008 14:15:01.515368 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" event={"ID":"e6164116-5040-4443-85a0-390078514904","Type":"ContainerStarted","Data":"a5a51bc0fb78006a172a93e5bba503ed6895f2e5c6e658ae83c8f4c28c08f02f"} Oct 08 14:15:02 crc kubenswrapper[5065]: I1008 14:15:02.875342 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" Oct 08 14:15:02 crc kubenswrapper[5065]: I1008 14:15:02.995772 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6164116-5040-4443-85a0-390078514904-secret-volume\") pod \"e6164116-5040-4443-85a0-390078514904\" (UID: \"e6164116-5040-4443-85a0-390078514904\") " Oct 08 14:15:02 crc kubenswrapper[5065]: I1008 14:15:02.995867 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xk57\" (UniqueName: \"kubernetes.io/projected/e6164116-5040-4443-85a0-390078514904-kube-api-access-6xk57\") pod \"e6164116-5040-4443-85a0-390078514904\" (UID: \"e6164116-5040-4443-85a0-390078514904\") " Oct 08 14:15:02 crc kubenswrapper[5065]: I1008 14:15:02.995969 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6164116-5040-4443-85a0-390078514904-config-volume\") pod \"e6164116-5040-4443-85a0-390078514904\" (UID: \"e6164116-5040-4443-85a0-390078514904\") " Oct 08 14:15:02 crc kubenswrapper[5065]: I1008 14:15:02.996617 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6164116-5040-4443-85a0-390078514904-config-volume" (OuterVolumeSpecName: "config-volume") pod "e6164116-5040-4443-85a0-390078514904" (UID: "e6164116-5040-4443-85a0-390078514904"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:15:03 crc kubenswrapper[5065]: I1008 14:15:03.000774 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6164116-5040-4443-85a0-390078514904-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e6164116-5040-4443-85a0-390078514904" (UID: "e6164116-5040-4443-85a0-390078514904"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:15:03 crc kubenswrapper[5065]: I1008 14:15:03.001347 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6164116-5040-4443-85a0-390078514904-kube-api-access-6xk57" (OuterVolumeSpecName: "kube-api-access-6xk57") pod "e6164116-5040-4443-85a0-390078514904" (UID: "e6164116-5040-4443-85a0-390078514904"). InnerVolumeSpecName "kube-api-access-6xk57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:15:03 crc kubenswrapper[5065]: I1008 14:15:03.097561 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xk57\" (UniqueName: \"kubernetes.io/projected/e6164116-5040-4443-85a0-390078514904-kube-api-access-6xk57\") on node \"crc\" DevicePath \"\"" Oct 08 14:15:03 crc kubenswrapper[5065]: I1008 14:15:03.097620 5065 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6164116-5040-4443-85a0-390078514904-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 14:15:03 crc kubenswrapper[5065]: I1008 14:15:03.097636 5065 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6164116-5040-4443-85a0-390078514904-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 14:15:03 crc kubenswrapper[5065]: I1008 14:15:03.530568 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" event={"ID":"e6164116-5040-4443-85a0-390078514904","Type":"ContainerDied","Data":"a5a51bc0fb78006a172a93e5bba503ed6895f2e5c6e658ae83c8f4c28c08f02f"} Oct 08 14:15:03 crc kubenswrapper[5065]: I1008 14:15:03.530603 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5a51bc0fb78006a172a93e5bba503ed6895f2e5c6e658ae83c8f4c28c08f02f" Oct 08 14:15:03 crc kubenswrapper[5065]: I1008 14:15:03.530639 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-nrhdc" Oct 08 14:15:03 crc kubenswrapper[5065]: I1008 14:15:03.949529 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws"] Oct 08 14:15:03 crc kubenswrapper[5065]: I1008 14:15:03.956288 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332170-sp7ws"] Oct 08 14:15:04 crc kubenswrapper[5065]: I1008 14:15:04.885601 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b20789e-c4cd-4819-b966-b8897cc55b60" path="/var/lib/kubelet/pods/6b20789e-c4cd-4819-b966-b8897cc55b60/volumes" Oct 08 14:15:28 crc kubenswrapper[5065]: I1008 14:15:28.895123 5065 scope.go:117] "RemoveContainer" containerID="f022e75bf11a900ae58837843328c4337993d3b8745834a8226ffb04939d3695" Oct 08 14:16:54 crc kubenswrapper[5065]: I1008 14:16:54.375900 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:16:54 crc kubenswrapper[5065]: I1008 14:16:54.376731 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:17:24 crc kubenswrapper[5065]: I1008 14:17:24.375928 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Oct 08 14:17:24 crc kubenswrapper[5065]: I1008 14:17:24.376824 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 14:17:53 crc kubenswrapper[5065]: I1008 14:17:53.769360 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mxt2b"]
Oct 08 14:17:53 crc kubenswrapper[5065]: E1008 14:17:53.771362 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6164116-5040-4443-85a0-390078514904" containerName="collect-profiles"
Oct 08 14:17:53 crc kubenswrapper[5065]: I1008 14:17:53.771500 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6164116-5040-4443-85a0-390078514904" containerName="collect-profiles"
Oct 08 14:17:53 crc kubenswrapper[5065]: I1008 14:17:53.771752 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6164116-5040-4443-85a0-390078514904" containerName="collect-profiles"
Oct 08 14:17:53 crc kubenswrapper[5065]: I1008 14:17:53.773079 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:17:53 crc kubenswrapper[5065]: I1008 14:17:53.799891 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mxt2b"]
Oct 08 14:17:53 crc kubenswrapper[5065]: I1008 14:17:53.909853 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6fz6\" (UniqueName: \"kubernetes.io/projected/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-kube-api-access-r6fz6\") pod \"redhat-marketplace-mxt2b\" (UID: \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\") " pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:17:53 crc kubenswrapper[5065]: I1008 14:17:53.910038 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-utilities\") pod \"redhat-marketplace-mxt2b\" (UID: \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\") " pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:17:53 crc kubenswrapper[5065]: I1008 14:17:53.910211 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-catalog-content\") pod \"redhat-marketplace-mxt2b\" (UID: \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\") " pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:17:54 crc kubenswrapper[5065]: I1008 14:17:54.011516 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6fz6\" (UniqueName: \"kubernetes.io/projected/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-kube-api-access-r6fz6\") pod \"redhat-marketplace-mxt2b\" (UID: \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\") " pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:17:54 crc kubenswrapper[5065]: I1008 14:17:54.011814 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-utilities\") pod \"redhat-marketplace-mxt2b\" (UID: \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\") " pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:17:54 crc kubenswrapper[5065]: I1008 14:17:54.011969 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-catalog-content\") pod \"redhat-marketplace-mxt2b\" (UID: \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\") " pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:17:54 crc kubenswrapper[5065]: I1008 14:17:54.012797 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-utilities\") pod \"redhat-marketplace-mxt2b\" (UID: \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\") " pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:17:54 crc kubenswrapper[5065]: I1008 14:17:54.013081 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-catalog-content\") pod \"redhat-marketplace-mxt2b\" (UID: \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\") " pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:17:54 crc kubenswrapper[5065]: I1008 14:17:54.037942 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6fz6\" (UniqueName: \"kubernetes.io/projected/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-kube-api-access-r6fz6\") pod \"redhat-marketplace-mxt2b\" (UID: \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\") " pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:17:54 crc kubenswrapper[5065]: I1008 14:17:54.113001 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:17:54 crc kubenswrapper[5065]: I1008 14:17:54.375854 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 14:17:54 crc kubenswrapper[5065]: I1008 14:17:54.376255 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 14:17:54 crc kubenswrapper[5065]: I1008 14:17:54.376315 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj"
Oct 08 14:17:54 crc kubenswrapper[5065]: I1008 14:17:54.377050 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 14:17:54 crc kubenswrapper[5065]: I1008 14:17:54.377122 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" gracePeriod=600
Oct 08 14:17:54 crc kubenswrapper[5065]: I1008 14:17:54.572350 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mxt2b"]
Oct 08 14:17:55 crc kubenswrapper[5065]: E1008 14:17:55.028843 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 14:17:55 crc kubenswrapper[5065]: I1008 14:17:55.050341 5065 generic.go:334] "Generic (PLEG): container finished" podID="a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" containerID="0d8093fff35a5fee0972a0a4d3c6744217f8c89d13a60d8f29676a8a3cb5a334" exitCode=0
Oct 08 14:17:55 crc kubenswrapper[5065]: I1008 14:17:55.050437 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mxt2b" event={"ID":"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3","Type":"ContainerDied","Data":"0d8093fff35a5fee0972a0a4d3c6744217f8c89d13a60d8f29676a8a3cb5a334"}
Oct 08 14:17:55 crc kubenswrapper[5065]: I1008 14:17:55.050462 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mxt2b" event={"ID":"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3","Type":"ContainerStarted","Data":"31cac12785840d32e4fc4d3f2a71c319e2399e244a430f6b7885aae94cdf75bc"}
Oct 08 14:17:55 crc kubenswrapper[5065]: I1008 14:17:55.052768 5065 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 08 14:17:55 crc kubenswrapper[5065]: I1008 14:17:55.057314 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" exitCode=0
Oct 08 14:17:55 crc kubenswrapper[5065]: I1008 14:17:55.057369 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d"}
Oct 08 14:17:55 crc kubenswrapper[5065]: I1008 14:17:55.057485 5065 scope.go:117] "RemoveContainer" containerID="f3cffa30df2c92acbf07fe560d24ae221a3a9c83ee70e9cca02d3084d8c16aab"
Oct 08 14:17:55 crc kubenswrapper[5065]: I1008 14:17:55.058230 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d"
Oct 08 14:17:55 crc kubenswrapper[5065]: E1008 14:17:55.058737 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 14:17:56 crc kubenswrapper[5065]: I1008 14:17:56.068331 5065 generic.go:334] "Generic (PLEG): container finished" podID="a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" containerID="eeb9386cf4a8bce6b4963df908f025c4124b5662b507724fd53d9f540b49bf42" exitCode=0
Oct 08 14:17:56 crc kubenswrapper[5065]: I1008 14:17:56.068481 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mxt2b" event={"ID":"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3","Type":"ContainerDied","Data":"eeb9386cf4a8bce6b4963df908f025c4124b5662b507724fd53d9f540b49bf42"}
Oct 08 14:17:57 crc kubenswrapper[5065]: I1008 14:17:57.077658 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mxt2b" event={"ID":"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3","Type":"ContainerStarted","Data":"9c72b0a8cc6934e113b75acd1de4be084ef3b6c13df7495d12f9c4a27bc3c04f"}
Oct 08 14:17:57 crc kubenswrapper[5065]: I1008 14:17:57.099895 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mxt2b" podStartSLOduration=2.626281883 podStartE2EDuration="4.099879592s" podCreationTimestamp="2025-10-08 14:17:53 +0000 UTC" firstStartedPulling="2025-10-08 14:17:55.052366056 +0000 UTC m=+3576.829747833" lastFinishedPulling="2025-10-08 14:17:56.525963735 +0000 UTC m=+3578.303345542" observedRunningTime="2025-10-08 14:17:57.095235175 +0000 UTC m=+3578.872616932" watchObservedRunningTime="2025-10-08 14:17:57.099879592 +0000 UTC m=+3578.877261349"
Oct 08 14:18:04 crc kubenswrapper[5065]: I1008 14:18:04.113742 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:18:04 crc kubenswrapper[5065]: I1008 14:18:04.114107 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:18:04 crc kubenswrapper[5065]: I1008 14:18:04.178293 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:18:04 crc kubenswrapper[5065]: I1008 14:18:04.249490 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:18:04 crc kubenswrapper[5065]: I1008 14:18:04.421064 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mxt2b"]
Oct 08 14:18:06 crc kubenswrapper[5065]: I1008 14:18:06.159701 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mxt2b" podUID="a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" containerName="registry-server" containerID="cri-o://9c72b0a8cc6934e113b75acd1de4be084ef3b6c13df7495d12f9c4a27bc3c04f" gracePeriod=2
Oct 08 14:18:06 crc kubenswrapper[5065]: I1008 14:18:06.836978 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:18:06 crc kubenswrapper[5065]: I1008 14:18:06.875595 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d"
Oct 08 14:18:06 crc kubenswrapper[5065]: E1008 14:18:06.876137 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.035163 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-catalog-content\") pod \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\" (UID: \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\") "
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.035219 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-utilities\") pod \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\" (UID: \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\") "
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.035345 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6fz6\" (UniqueName: \"kubernetes.io/projected/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-kube-api-access-r6fz6\") pod \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\" (UID: \"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3\") "
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.036537 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-utilities" (OuterVolumeSpecName: "utilities") pod "a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" (UID: "a0fa6d52-3b64-4911-89cf-da9b7cf87ab3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.042513 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-kube-api-access-r6fz6" (OuterVolumeSpecName: "kube-api-access-r6fz6") pod "a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" (UID: "a0fa6d52-3b64-4911-89cf-da9b7cf87ab3"). InnerVolumeSpecName "kube-api-access-r6fz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.137425 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.137458 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6fz6\" (UniqueName: \"kubernetes.io/projected/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-kube-api-access-r6fz6\") on node \"crc\" DevicePath \"\""
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.170260 5065 generic.go:334] "Generic (PLEG): container finished" podID="a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" containerID="9c72b0a8cc6934e113b75acd1de4be084ef3b6c13df7495d12f9c4a27bc3c04f" exitCode=0
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.170311 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mxt2b" event={"ID":"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3","Type":"ContainerDied","Data":"9c72b0a8cc6934e113b75acd1de4be084ef3b6c13df7495d12f9c4a27bc3c04f"}
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.170329 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mxt2b"
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.170345 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mxt2b" event={"ID":"a0fa6d52-3b64-4911-89cf-da9b7cf87ab3","Type":"ContainerDied","Data":"31cac12785840d32e4fc4d3f2a71c319e2399e244a430f6b7885aae94cdf75bc"}
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.170363 5065 scope.go:117] "RemoveContainer" containerID="9c72b0a8cc6934e113b75acd1de4be084ef3b6c13df7495d12f9c4a27bc3c04f"
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.192185 5065 scope.go:117] "RemoveContainer" containerID="eeb9386cf4a8bce6b4963df908f025c4124b5662b507724fd53d9f540b49bf42"
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.211079 5065 scope.go:117] "RemoveContainer" containerID="0d8093fff35a5fee0972a0a4d3c6744217f8c89d13a60d8f29676a8a3cb5a334"
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.231042 5065 scope.go:117] "RemoveContainer" containerID="9c72b0a8cc6934e113b75acd1de4be084ef3b6c13df7495d12f9c4a27bc3c04f"
Oct 08 14:18:07 crc kubenswrapper[5065]: E1008 14:18:07.231916 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c72b0a8cc6934e113b75acd1de4be084ef3b6c13df7495d12f9c4a27bc3c04f\": container with ID starting with 9c72b0a8cc6934e113b75acd1de4be084ef3b6c13df7495d12f9c4a27bc3c04f not found: ID does not exist" containerID="9c72b0a8cc6934e113b75acd1de4be084ef3b6c13df7495d12f9c4a27bc3c04f"
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.231965 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c72b0a8cc6934e113b75acd1de4be084ef3b6c13df7495d12f9c4a27bc3c04f"} err="failed to get container status \"9c72b0a8cc6934e113b75acd1de4be084ef3b6c13df7495d12f9c4a27bc3c04f\": rpc error: code = NotFound desc = could not find container \"9c72b0a8cc6934e113b75acd1de4be084ef3b6c13df7495d12f9c4a27bc3c04f\": container with ID starting with 9c72b0a8cc6934e113b75acd1de4be084ef3b6c13df7495d12f9c4a27bc3c04f not found: ID does not exist"
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.231990 5065 scope.go:117] "RemoveContainer" containerID="eeb9386cf4a8bce6b4963df908f025c4124b5662b507724fd53d9f540b49bf42"
Oct 08 14:18:07 crc kubenswrapper[5065]: E1008 14:18:07.232750 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeb9386cf4a8bce6b4963df908f025c4124b5662b507724fd53d9f540b49bf42\": container with ID starting with eeb9386cf4a8bce6b4963df908f025c4124b5662b507724fd53d9f540b49bf42 not found: ID does not exist" containerID="eeb9386cf4a8bce6b4963df908f025c4124b5662b507724fd53d9f540b49bf42"
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.232808 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeb9386cf4a8bce6b4963df908f025c4124b5662b507724fd53d9f540b49bf42"} err="failed to get container status \"eeb9386cf4a8bce6b4963df908f025c4124b5662b507724fd53d9f540b49bf42\": rpc error: code = NotFound desc = could not find container \"eeb9386cf4a8bce6b4963df908f025c4124b5662b507724fd53d9f540b49bf42\": container with ID starting with eeb9386cf4a8bce6b4963df908f025c4124b5662b507724fd53d9f540b49bf42 not found: ID does not exist"
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.232829 5065 scope.go:117] "RemoveContainer" containerID="0d8093fff35a5fee0972a0a4d3c6744217f8c89d13a60d8f29676a8a3cb5a334"
Oct 08 14:18:07 crc kubenswrapper[5065]: E1008 14:18:07.233135 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d8093fff35a5fee0972a0a4d3c6744217f8c89d13a60d8f29676a8a3cb5a334\": container with ID starting with 0d8093fff35a5fee0972a0a4d3c6744217f8c89d13a60d8f29676a8a3cb5a334 not found: ID does not exist" containerID="0d8093fff35a5fee0972a0a4d3c6744217f8c89d13a60d8f29676a8a3cb5a334"
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.233167 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d8093fff35a5fee0972a0a4d3c6744217f8c89d13a60d8f29676a8a3cb5a334"} err="failed to get container status \"0d8093fff35a5fee0972a0a4d3c6744217f8c89d13a60d8f29676a8a3cb5a334\": rpc error: code = NotFound desc = could not find container \"0d8093fff35a5fee0972a0a4d3c6744217f8c89d13a60d8f29676a8a3cb5a334\": container with ID starting with 0d8093fff35a5fee0972a0a4d3c6744217f8c89d13a60d8f29676a8a3cb5a334 not found: ID does not exist"
Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.279111 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" (UID: "a0fa6d52-3b64-4911-89cf-da9b7cf87ab3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.339517 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.512862 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mxt2b"] Oct 08 14:18:07 crc kubenswrapper[5065]: I1008 14:18:07.520809 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mxt2b"] Oct 08 14:18:08 crc kubenswrapper[5065]: I1008 14:18:08.889187 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" path="/var/lib/kubelet/pods/a0fa6d52-3b64-4911-89cf-da9b7cf87ab3/volumes" Oct 08 14:18:20 crc kubenswrapper[5065]: I1008 14:18:20.874272 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:18:20 crc kubenswrapper[5065]: E1008 14:18:20.875332 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.007145 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f64nr"] Oct 08 14:18:25 crc kubenswrapper[5065]: E1008 14:18:25.008088 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" containerName="registry-server" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.008110 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" containerName="registry-server" Oct 08 14:18:25 crc kubenswrapper[5065]: E1008 14:18:25.008159 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" containerName="extract-content" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.008173 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" containerName="extract-content" Oct 08 14:18:25 crc kubenswrapper[5065]: E1008 14:18:25.008188 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" containerName="extract-utilities" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.008199 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" containerName="extract-utilities" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.008471 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0fa6d52-3b64-4911-89cf-da9b7cf87ab3" containerName="registry-server" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.010074 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.026644 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-utilities\") pod \"redhat-operators-f64nr\" (UID: \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\") " pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.026796 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkhsn\" (UniqueName: \"kubernetes.io/projected/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-kube-api-access-mkhsn\") pod \"redhat-operators-f64nr\" (UID: \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\") " pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.026859 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-catalog-content\") pod \"redhat-operators-f64nr\" (UID: \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\") " pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.038988 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f64nr"] Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.128086 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkhsn\" (UniqueName: \"kubernetes.io/projected/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-kube-api-access-mkhsn\") pod \"redhat-operators-f64nr\" (UID: \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\") " pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.128190 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-catalog-content\") pod \"redhat-operators-f64nr\" (UID: \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\") " pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.128333 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-utilities\") pod \"redhat-operators-f64nr\" (UID: \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\") " pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.128753 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-catalog-content\") pod \"redhat-operators-f64nr\" (UID: \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\") " pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.128813 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-utilities\") pod \"redhat-operators-f64nr\" (UID: \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\") " pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.149915 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mkhsn\" (UniqueName: \"kubernetes.io/projected/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-kube-api-access-mkhsn\") pod \"redhat-operators-f64nr\" (UID: \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\") " pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.353810 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:25 crc kubenswrapper[5065]: I1008 14:18:25.836896 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f64nr"] Oct 08 14:18:26 crc kubenswrapper[5065]: I1008 14:18:26.330813 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f64nr" event={"ID":"5894ae2c-50d8-4a07-8d5f-6ad2a673f766","Type":"ContainerStarted","Data":"84a81d50ab545909042dfee74e55f279b558fc61fe8700664de750a54abf1293"} Oct 08 14:18:26 crc kubenswrapper[5065]: E1008 14:18:26.439994 5065 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5894ae2c_50d8_4a07_8d5f_6ad2a673f766.slice/crio-e3ed4a061a7f926a998f5ee8dfe358f21671e05e3597bdc55fbb9b64a3aab397.scope\": RecentStats: unable to find data in memory cache]" Oct 08 14:18:27 crc kubenswrapper[5065]: I1008 14:18:27.341919 5065 generic.go:334] "Generic (PLEG): container finished" podID="5894ae2c-50d8-4a07-8d5f-6ad2a673f766" containerID="e3ed4a061a7f926a998f5ee8dfe358f21671e05e3597bdc55fbb9b64a3aab397" exitCode=0 Oct 08 14:18:27 crc kubenswrapper[5065]: I1008 14:18:27.341988 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f64nr" event={"ID":"5894ae2c-50d8-4a07-8d5f-6ad2a673f766","Type":"ContainerDied","Data":"e3ed4a061a7f926a998f5ee8dfe358f21671e05e3597bdc55fbb9b64a3aab397"} Oct 08 14:18:30 crc kubenswrapper[5065]: I1008 14:18:30.366956 5065 generic.go:334] "Generic (PLEG): container finished" podID="5894ae2c-50d8-4a07-8d5f-6ad2a673f766" containerID="8d0200e1dd1c28033d506212cfeaf81c128879e88c9a6cbff62210f720e537d5" exitCode=0 Oct 08 14:18:30 crc kubenswrapper[5065]: I1008 14:18:30.367033 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f64nr" event={"ID":"5894ae2c-50d8-4a07-8d5f-6ad2a673f766","Type":"ContainerDied","Data":"8d0200e1dd1c28033d506212cfeaf81c128879e88c9a6cbff62210f720e537d5"} Oct 08 14:18:32 crc kubenswrapper[5065]: I1008 14:18:32.381473 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f64nr" event={"ID":"5894ae2c-50d8-4a07-8d5f-6ad2a673f766","Type":"ContainerStarted","Data":"96541c12393d917a0c830cba4aeb57a3284558cd1e9f83b7ae2c1327f85ac562"} Oct 08 14:18:32 crc kubenswrapper[5065]: I1008 14:18:32.400635 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f64nr" podStartSLOduration=4.235882885 podStartE2EDuration="8.400614s" podCreationTimestamp="2025-10-08 14:18:24 +0000 UTC" firstStartedPulling="2025-10-08 14:18:27.343856953 +0000 UTC m=+3609.121238710" lastFinishedPulling="2025-10-08 14:18:31.508588028 +0000 UTC m=+3613.285969825" observedRunningTime="2025-10-08 14:18:32.399381466 +0000 UTC m=+3614.176763233" watchObservedRunningTime="2025-10-08 14:18:32.400614 +0000 UTC m=+3614.177995757" Oct 08 14:18:35 crc kubenswrapper[5065]: I1008 14:18:35.354395 5065 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:35 crc kubenswrapper[5065]: I1008 14:18:35.354728 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:35 crc kubenswrapper[5065]: I1008 14:18:35.874880 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:18:35 crc kubenswrapper[5065]: E1008 14:18:35.875316 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:18:36 crc kubenswrapper[5065]: I1008 14:18:36.395954 5065 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f64nr" podUID="5894ae2c-50d8-4a07-8d5f-6ad2a673f766" containerName="registry-server" probeResult="failure" output=< Oct 08 14:18:36 crc kubenswrapper[5065]: timeout: failed to connect service ":50051" within 1s Oct 08 14:18:36 crc kubenswrapper[5065]: > Oct 08 14:18:45 crc kubenswrapper[5065]: I1008 14:18:45.397234 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:45 crc kubenswrapper[5065]: I1008 14:18:45.440849 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:45 crc kubenswrapper[5065]: I1008 14:18:45.629360 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f64nr"] Oct 08 14:18:46 crc kubenswrapper[5065]: I1008 14:18:46.476233 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f64nr" podUID="5894ae2c-50d8-4a07-8d5f-6ad2a673f766" containerName="registry-server" containerID="cri-o://96541c12393d917a0c830cba4aeb57a3284558cd1e9f83b7ae2c1327f85ac562" gracePeriod=2 Oct 08 14:18:46 crc kubenswrapper[5065]: I1008 14:18:46.849567 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.038796 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-utilities\") pod \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\" (UID: \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\") " Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.039145 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkhsn\" (UniqueName: \"kubernetes.io/projected/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-kube-api-access-mkhsn\") pod \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\" (UID: \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\") " Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.039195 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-catalog-content\") pod \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\" (UID: \"5894ae2c-50d8-4a07-8d5f-6ad2a673f766\") " Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.039864 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-utilities" (OuterVolumeSpecName: "utilities") pod "5894ae2c-50d8-4a07-8d5f-6ad2a673f766" (UID: "5894ae2c-50d8-4a07-8d5f-6ad2a673f766"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.040577 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.044964 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-kube-api-access-mkhsn" (OuterVolumeSpecName: "kube-api-access-mkhsn") pod "5894ae2c-50d8-4a07-8d5f-6ad2a673f766" (UID: "5894ae2c-50d8-4a07-8d5f-6ad2a673f766"). InnerVolumeSpecName "kube-api-access-mkhsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.122828 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5894ae2c-50d8-4a07-8d5f-6ad2a673f766" (UID: "5894ae2c-50d8-4a07-8d5f-6ad2a673f766"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.141692 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkhsn\" (UniqueName: \"kubernetes.io/projected/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-kube-api-access-mkhsn\") on node \"crc\" DevicePath \"\"" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.141738 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5894ae2c-50d8-4a07-8d5f-6ad2a673f766-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.485605 5065 generic.go:334] "Generic (PLEG): container finished" podID="5894ae2c-50d8-4a07-8d5f-6ad2a673f766" containerID="96541c12393d917a0c830cba4aeb57a3284558cd1e9f83b7ae2c1327f85ac562" exitCode=0 Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.485677 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f64nr" event={"ID":"5894ae2c-50d8-4a07-8d5f-6ad2a673f766","Type":"ContainerDied","Data":"96541c12393d917a0c830cba4aeb57a3284558cd1e9f83b7ae2c1327f85ac562"} Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.485732 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f64nr" event={"ID":"5894ae2c-50d8-4a07-8d5f-6ad2a673f766","Type":"ContainerDied","Data":"84a81d50ab545909042dfee74e55f279b558fc61fe8700664de750a54abf1293"} Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.485744 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f64nr" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.485755 5065 scope.go:117] "RemoveContainer" containerID="96541c12393d917a0c830cba4aeb57a3284558cd1e9f83b7ae2c1327f85ac562" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.512912 5065 scope.go:117] "RemoveContainer" containerID="8d0200e1dd1c28033d506212cfeaf81c128879e88c9a6cbff62210f720e537d5" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.514635 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f64nr"] Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.534990 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f64nr"] Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.537050 5065 scope.go:117] "RemoveContainer" containerID="e3ed4a061a7f926a998f5ee8dfe358f21671e05e3597bdc55fbb9b64a3aab397" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.553314 5065 scope.go:117] "RemoveContainer" containerID="96541c12393d917a0c830cba4aeb57a3284558cd1e9f83b7ae2c1327f85ac562" Oct 08 14:18:47 crc kubenswrapper[5065]: E1008 14:18:47.553734 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96541c12393d917a0c830cba4aeb57a3284558cd1e9f83b7ae2c1327f85ac562\": container with ID starting with 96541c12393d917a0c830cba4aeb57a3284558cd1e9f83b7ae2c1327f85ac562 not found: ID does not exist" containerID="96541c12393d917a0c830cba4aeb57a3284558cd1e9f83b7ae2c1327f85ac562" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.553766 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96541c12393d917a0c830cba4aeb57a3284558cd1e9f83b7ae2c1327f85ac562"} err="failed to get container status \"96541c12393d917a0c830cba4aeb57a3284558cd1e9f83b7ae2c1327f85ac562\": 
rpc error: code = NotFound desc = could not find container \"96541c12393d917a0c830cba4aeb57a3284558cd1e9f83b7ae2c1327f85ac562\": container with ID starting with 96541c12393d917a0c830cba4aeb57a3284558cd1e9f83b7ae2c1327f85ac562 not found: ID does not exist" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.553786 5065 scope.go:117] "RemoveContainer" containerID="8d0200e1dd1c28033d506212cfeaf81c128879e88c9a6cbff62210f720e537d5" Oct 08 14:18:47 crc kubenswrapper[5065]: E1008 14:18:47.554122 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0200e1dd1c28033d506212cfeaf81c128879e88c9a6cbff62210f720e537d5\": container with ID starting with 8d0200e1dd1c28033d506212cfeaf81c128879e88c9a6cbff62210f720e537d5 not found: ID does not exist" containerID="8d0200e1dd1c28033d506212cfeaf81c128879e88c9a6cbff62210f720e537d5" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.554166 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0200e1dd1c28033d506212cfeaf81c128879e88c9a6cbff62210f720e537d5"} err="failed to get container status \"8d0200e1dd1c28033d506212cfeaf81c128879e88c9a6cbff62210f720e537d5\": rpc error: code = NotFound desc = could not find container \"8d0200e1dd1c28033d506212cfeaf81c128879e88c9a6cbff62210f720e537d5\": container with ID starting with 8d0200e1dd1c28033d506212cfeaf81c128879e88c9a6cbff62210f720e537d5 not found: ID does not exist" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.554192 5065 scope.go:117] "RemoveContainer" containerID="e3ed4a061a7f926a998f5ee8dfe358f21671e05e3597bdc55fbb9b64a3aab397" Oct 08 14:18:47 crc kubenswrapper[5065]: E1008 14:18:47.554559 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ed4a061a7f926a998f5ee8dfe358f21671e05e3597bdc55fbb9b64a3aab397\": container with ID starting with e3ed4a061a7f926a998f5ee8dfe358f21671e05e3597bdc55fbb9b64a3aab397 not found: ID does not exist" containerID="e3ed4a061a7f926a998f5ee8dfe358f21671e05e3597bdc55fbb9b64a3aab397" Oct 08 14:18:47 crc kubenswrapper[5065]: I1008 14:18:47.554585 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ed4a061a7f926a998f5ee8dfe358f21671e05e3597bdc55fbb9b64a3aab397"} err="failed to get container status \"e3ed4a061a7f926a998f5ee8dfe358f21671e05e3597bdc55fbb9b64a3aab397\": rpc error: code = NotFound desc = could not find container \"e3ed4a061a7f926a998f5ee8dfe358f21671e05e3597bdc55fbb9b64a3aab397\": container with ID starting with e3ed4a061a7f926a998f5ee8dfe358f21671e05e3597bdc55fbb9b64a3aab397 not found: ID does not exist" Oct 08 14:18:48 crc kubenswrapper[5065]: I1008 14:18:48.884083 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5894ae2c-50d8-4a07-8d5f-6ad2a673f766" path="/var/lib/kubelet/pods/5894ae2c-50d8-4a07-8d5f-6ad2a673f766/volumes" Oct 08 14:18:49 crc kubenswrapper[5065]: I1008 14:18:49.873663 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:18:49 crc kubenswrapper[5065]: E1008 14:18:49.874150 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" 
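
Every RemoveContainer above then fails with gRPC NotFound: CRI-O has already deleted the containers, so the status lookup has nothing to return. The kubelet logs the error and moves on, because container deletion is idempotent and "already gone" is the desired outcome. A sketch of that pattern, assuming the google.golang.org/grpc status and codes packages:

    // deletecontainer.go - treat NotFound from the runtime as success.
    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    func removeContainer(id string, runtimeRemove func(string) error) error {
        if err := runtimeRemove(id); err != nil {
            if status.Code(err) == codes.NotFound {
                // Already removed: the goal state holds, so not a failure.
                fmt.Printf("container %s already removed; ignoring\n", id)
                return nil
            }
            return err
        }
        return nil
    }

    func main() {
        alreadyGone := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        _ = removeContainer("96541c12393d", alreadyGone)
    }
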
pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:19:04 crc kubenswrapper[5065]: I1008 14:19:04.874616 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:19:04 crc kubenswrapper[5065]: E1008 14:19:04.875285 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:19:18 crc kubenswrapper[5065]: I1008 14:19:18.882980 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:19:18 crc kubenswrapper[5065]: E1008 14:19:18.884135 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:19:31 crc kubenswrapper[5065]: I1008 14:19:31.873624 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:19:31 crc kubenswrapper[5065]: E1008 14:19:31.874339 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:19:42 crc kubenswrapper[5065]: I1008 14:19:42.873508 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:19:42 crc kubenswrapper[5065]: E1008 14:19:42.874325 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:19:56 crc kubenswrapper[5065]: I1008 14:19:56.873897 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:19:56 crc kubenswrapper[5065]: E1008 14:19:56.874621 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:20:11 crc kubenswrapper[5065]: I1008 14:20:11.874979 5065 
scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:20:11 crc kubenswrapper[5065]: E1008 14:20:11.876178 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:20:26 crc kubenswrapper[5065]: I1008 14:20:26.873938 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:20:26 crc kubenswrapper[5065]: E1008 14:20:26.874818 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:20:38 crc kubenswrapper[5065]: I1008 14:20:38.888886 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:20:38 crc kubenswrapper[5065]: E1008 14:20:38.890195 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:20:53 crc kubenswrapper[5065]: I1008 14:20:53.873373 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:20:53 crc kubenswrapper[5065]: E1008 14:20:53.874549 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:21:05 crc kubenswrapper[5065]: I1008 14:21:05.874131 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:21:05 crc kubenswrapper[5065]: E1008 14:21:05.875193 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:21:19 crc kubenswrapper[5065]: I1008 14:21:19.873193 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:21:19 crc kubenswrapper[5065]: E1008 14:21:19.874007 5065 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:21:34 crc kubenswrapper[5065]: I1008 14:21:34.874309 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:21:34 crc kubenswrapper[5065]: E1008 14:21:34.875874 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:21:46 crc kubenswrapper[5065]: I1008 14:21:46.873770 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:21:46 crc kubenswrapper[5065]: E1008 14:21:46.874572 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:21:59 crc kubenswrapper[5065]: I1008 14:21:59.873943 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:21:59 crc kubenswrapper[5065]: E1008 14:21:59.874784 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:22:11 crc kubenswrapper[5065]: I1008 14:22:11.874055 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:22:11 crc kubenswrapper[5065]: E1008 14:22:11.874691 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:22:24 crc kubenswrapper[5065]: I1008 14:22:24.873474 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:22:24 crc kubenswrapper[5065]: E1008 14:22:24.874169 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:22:39 crc kubenswrapper[5065]: I1008 14:22:39.874217 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:22:39 crc kubenswrapper[5065]: E1008 14:22:39.875134 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:22:53 crc kubenswrapper[5065]: I1008 14:22:53.873447 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:22:53 crc kubenswrapper[5065]: E1008 14:22:53.874281 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.037277 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4n9cn"] Oct 08 14:22:55 crc kubenswrapper[5065]: E1008 14:22:55.038650 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5894ae2c-50d8-4a07-8d5f-6ad2a673f766" containerName="extract-utilities" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.038760 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="5894ae2c-50d8-4a07-8d5f-6ad2a673f766" containerName="extract-utilities" Oct 08 14:22:55 crc kubenswrapper[5065]: E1008 14:22:55.038886 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5894ae2c-50d8-4a07-8d5f-6ad2a673f766" containerName="registry-server" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.038967 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="5894ae2c-50d8-4a07-8d5f-6ad2a673f766" containerName="registry-server" Oct 08 14:22:55 crc kubenswrapper[5065]: E1008 14:22:55.039061 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5894ae2c-50d8-4a07-8d5f-6ad2a673f766" containerName="extract-content" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.039135 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="5894ae2c-50d8-4a07-8d5f-6ad2a673f766" containerName="extract-content" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.039379 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="5894ae2c-50d8-4a07-8d5f-6ad2a673f766" containerName="registry-server" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.040978 5065 util.go:30] "No sandbox for pod can be found. 
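
The repeating "Error syncing pod, skipping" pairs above (every 10-15 seconds from 14:18:49 to 14:22:53) are not fresh crashes; each sync attempt is refused while the CrashLoopBackOff window is still open. The kubelet doubles the back-off after every failed restart up to the 5m0s ceiling quoted in each message, and the run ends once the window lapses (the daemon's restart finally goes through at 14:23:07-14:23:08 below). A sketch of the delay growth, assuming the upstream defaults of a 10s base and a factor of two:

    // backoff.go - CrashLoopBackOff delay growth (assumed kubelet defaults).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, limit := 10*time.Second, 5*time.Minute
        for i := 1; i <= 7; i++ {
            fmt.Printf("restart %d: wait %v\n", i, delay) // 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
            delay *= 2
            if delay > limit {
                delay = limit // the "back-off 5m0s" ceiling in the log
            }
        }
    }
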
Need to start a new one" pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.054890 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4n9cn"] Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.145473 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf55d11-f6f4-4391-ae35-ae0a28dda351-catalog-content\") pod \"community-operators-4n9cn\" (UID: \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\") " pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.145532 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlgcf\" (UniqueName: \"kubernetes.io/projected/8bf55d11-f6f4-4391-ae35-ae0a28dda351-kube-api-access-qlgcf\") pod \"community-operators-4n9cn\" (UID: \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\") " pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.146004 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf55d11-f6f4-4391-ae35-ae0a28dda351-utilities\") pod \"community-operators-4n9cn\" (UID: \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\") " pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.247401 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf55d11-f6f4-4391-ae35-ae0a28dda351-utilities\") pod \"community-operators-4n9cn\" (UID: \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\") " pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.247576 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf55d11-f6f4-4391-ae35-ae0a28dda351-catalog-content\") pod \"community-operators-4n9cn\" (UID: \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\") " pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.247621 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlgcf\" (UniqueName: \"kubernetes.io/projected/8bf55d11-f6f4-4391-ae35-ae0a28dda351-kube-api-access-qlgcf\") pod \"community-operators-4n9cn\" (UID: \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\") " pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.247989 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf55d11-f6f4-4391-ae35-ae0a28dda351-utilities\") pod \"community-operators-4n9cn\" (UID: \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\") " pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.248452 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf55d11-f6f4-4391-ae35-ae0a28dda351-catalog-content\") pod \"community-operators-4n9cn\" (UID: \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\") " pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.265547 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qlgcf\" (UniqueName: \"kubernetes.io/projected/8bf55d11-f6f4-4391-ae35-ae0a28dda351-kube-api-access-qlgcf\") pod \"community-operators-4n9cn\" (UID: \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\") " pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.360384 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:22:55 crc kubenswrapper[5065]: I1008 14:22:55.869031 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4n9cn"] Oct 08 14:22:56 crc kubenswrapper[5065]: I1008 14:22:56.494202 5065 generic.go:334] "Generic (PLEG): container finished" podID="8bf55d11-f6f4-4391-ae35-ae0a28dda351" containerID="2515798f1f42ba84dd9717173c838e5ff63181302e3aea553e796167186ecf6e" exitCode=0 Oct 08 14:22:56 crc kubenswrapper[5065]: I1008 14:22:56.494374 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4n9cn" event={"ID":"8bf55d11-f6f4-4391-ae35-ae0a28dda351","Type":"ContainerDied","Data":"2515798f1f42ba84dd9717173c838e5ff63181302e3aea553e796167186ecf6e"} Oct 08 14:22:56 crc kubenswrapper[5065]: I1008 14:22:56.494538 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4n9cn" event={"ID":"8bf55d11-f6f4-4391-ae35-ae0a28dda351","Type":"ContainerStarted","Data":"08994e0dfe4047b27dc8765c9e921385106bcd1b8256e861c1becc15809e893d"} Oct 08 14:22:56 crc kubenswrapper[5065]: I1008 14:22:56.498165 5065 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 14:22:58 crc kubenswrapper[5065]: I1008 14:22:58.513093 5065 generic.go:334] "Generic (PLEG): container finished" podID="8bf55d11-f6f4-4391-ae35-ae0a28dda351" containerID="fc90468156a92b6b6855496dfd3ddb56120324a133ea01a8127349222fdd0390" exitCode=0 Oct 08 14:22:58 crc kubenswrapper[5065]: I1008 14:22:58.513163 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4n9cn" event={"ID":"8bf55d11-f6f4-4391-ae35-ae0a28dda351","Type":"ContainerDied","Data":"fc90468156a92b6b6855496dfd3ddb56120324a133ea01a8127349222fdd0390"} Oct 08 14:22:59 crc kubenswrapper[5065]: I1008 14:22:59.520569 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4n9cn" event={"ID":"8bf55d11-f6f4-4391-ae35-ae0a28dda351","Type":"ContainerStarted","Data":"dbbaeb180bae74c726ba157e20c386684e162f866df61e5337365d1898c49465"} Oct 08 14:22:59 crc kubenswrapper[5065]: I1008 14:22:59.544257 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4n9cn" podStartSLOduration=2.107570022 podStartE2EDuration="4.544239321s" podCreationTimestamp="2025-10-08 14:22:55 +0000 UTC" firstStartedPulling="2025-10-08 14:22:56.497888718 +0000 UTC m=+3878.275270475" lastFinishedPulling="2025-10-08 14:22:58.934558017 +0000 UTC m=+3880.711939774" observedRunningTime="2025-10-08 14:22:59.53980719 +0000 UTC m=+3881.317188957" watchObservedRunningTime="2025-10-08 14:22:59.544239321 +0000 UTC m=+3881.321621078" Oct 08 14:23:05 crc kubenswrapper[5065]: I1008 14:23:05.360683 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:23:05 crc kubenswrapper[5065]: I1008 14:23:05.361345 5065 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:23:05 crc kubenswrapper[5065]: I1008 14:23:05.399721 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:23:05 crc kubenswrapper[5065]: I1008 14:23:05.614534 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:23:05 crc kubenswrapper[5065]: I1008 14:23:05.664208 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4n9cn"] Oct 08 14:23:07 crc kubenswrapper[5065]: I1008 14:23:07.577010 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4n9cn" podUID="8bf55d11-f6f4-4391-ae35-ae0a28dda351" containerName="registry-server" containerID="cri-o://dbbaeb180bae74c726ba157e20c386684e162f866df61e5337365d1898c49465" gracePeriod=2 Oct 08 14:23:07 crc kubenswrapper[5065]: I1008 14:23:07.874119 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.006059 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.119996 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf55d11-f6f4-4391-ae35-ae0a28dda351-utilities\") pod \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\" (UID: \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\") " Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.120103 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlgcf\" (UniqueName: \"kubernetes.io/projected/8bf55d11-f6f4-4391-ae35-ae0a28dda351-kube-api-access-qlgcf\") pod \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\" (UID: \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\") " Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.120244 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf55d11-f6f4-4391-ae35-ae0a28dda351-catalog-content\") pod \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\" (UID: \"8bf55d11-f6f4-4391-ae35-ae0a28dda351\") " Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.121209 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bf55d11-f6f4-4391-ae35-ae0a28dda351-utilities" (OuterVolumeSpecName: "utilities") pod "8bf55d11-f6f4-4391-ae35-ae0a28dda351" (UID: "8bf55d11-f6f4-4391-ae35-ae0a28dda351"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.127790 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf55d11-f6f4-4391-ae35-ae0a28dda351-kube-api-access-qlgcf" (OuterVolumeSpecName: "kube-api-access-qlgcf") pod "8bf55d11-f6f4-4391-ae35-ae0a28dda351" (UID: "8bf55d11-f6f4-4391-ae35-ae0a28dda351"). InnerVolumeSpecName "kube-api-access-qlgcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.179977 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bf55d11-f6f4-4391-ae35-ae0a28dda351-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bf55d11-f6f4-4391-ae35-ae0a28dda351" (UID: "8bf55d11-f6f4-4391-ae35-ae0a28dda351"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.221840 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf55d11-f6f4-4391-ae35-ae0a28dda351-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.222061 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf55d11-f6f4-4391-ae35-ae0a28dda351-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.222074 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlgcf\" (UniqueName: \"kubernetes.io/projected/8bf55d11-f6f4-4391-ae35-ae0a28dda351-kube-api-access-qlgcf\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.587391 5065 generic.go:334] "Generic (PLEG): container finished" podID="8bf55d11-f6f4-4391-ae35-ae0a28dda351" containerID="dbbaeb180bae74c726ba157e20c386684e162f866df61e5337365d1898c49465" exitCode=0 Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.587557 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4n9cn" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.588269 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4n9cn" event={"ID":"8bf55d11-f6f4-4391-ae35-ae0a28dda351","Type":"ContainerDied","Data":"dbbaeb180bae74c726ba157e20c386684e162f866df61e5337365d1898c49465"} Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.588340 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4n9cn" event={"ID":"8bf55d11-f6f4-4391-ae35-ae0a28dda351","Type":"ContainerDied","Data":"08994e0dfe4047b27dc8765c9e921385106bcd1b8256e861c1becc15809e893d"} Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.588386 5065 scope.go:117] "RemoveContainer" containerID="dbbaeb180bae74c726ba157e20c386684e162f866df61e5337365d1898c49465" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.594109 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"84d339a745bf200d368f262cdee08695e715cb47078b41ab3e6bc4f3de6ee03f"} Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.624005 5065 scope.go:117] "RemoveContainer" containerID="fc90468156a92b6b6855496dfd3ddb56120324a133ea01a8127349222fdd0390" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.633396 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4n9cn"] Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.637297 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4n9cn"] Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.679764 5065 scope.go:117] "RemoveContainer" 
containerID="2515798f1f42ba84dd9717173c838e5ff63181302e3aea553e796167186ecf6e" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.697865 5065 scope.go:117] "RemoveContainer" containerID="dbbaeb180bae74c726ba157e20c386684e162f866df61e5337365d1898c49465" Oct 08 14:23:08 crc kubenswrapper[5065]: E1008 14:23:08.698594 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbbaeb180bae74c726ba157e20c386684e162f866df61e5337365d1898c49465\": container with ID starting with dbbaeb180bae74c726ba157e20c386684e162f866df61e5337365d1898c49465 not found: ID does not exist" containerID="dbbaeb180bae74c726ba157e20c386684e162f866df61e5337365d1898c49465" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.698623 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbbaeb180bae74c726ba157e20c386684e162f866df61e5337365d1898c49465"} err="failed to get container status \"dbbaeb180bae74c726ba157e20c386684e162f866df61e5337365d1898c49465\": rpc error: code = NotFound desc = could not find container \"dbbaeb180bae74c726ba157e20c386684e162f866df61e5337365d1898c49465\": container with ID starting with dbbaeb180bae74c726ba157e20c386684e162f866df61e5337365d1898c49465 not found: ID does not exist" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.698644 5065 scope.go:117] "RemoveContainer" containerID="fc90468156a92b6b6855496dfd3ddb56120324a133ea01a8127349222fdd0390" Oct 08 14:23:08 crc kubenswrapper[5065]: E1008 14:23:08.698986 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc90468156a92b6b6855496dfd3ddb56120324a133ea01a8127349222fdd0390\": container with ID starting with fc90468156a92b6b6855496dfd3ddb56120324a133ea01a8127349222fdd0390 not found: ID does not exist" containerID="fc90468156a92b6b6855496dfd3ddb56120324a133ea01a8127349222fdd0390" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.699006 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc90468156a92b6b6855496dfd3ddb56120324a133ea01a8127349222fdd0390"} err="failed to get container status \"fc90468156a92b6b6855496dfd3ddb56120324a133ea01a8127349222fdd0390\": rpc error: code = NotFound desc = could not find container \"fc90468156a92b6b6855496dfd3ddb56120324a133ea01a8127349222fdd0390\": container with ID starting with fc90468156a92b6b6855496dfd3ddb56120324a133ea01a8127349222fdd0390 not found: ID does not exist" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.699021 5065 scope.go:117] "RemoveContainer" containerID="2515798f1f42ba84dd9717173c838e5ff63181302e3aea553e796167186ecf6e" Oct 08 14:23:08 crc kubenswrapper[5065]: E1008 14:23:08.700316 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2515798f1f42ba84dd9717173c838e5ff63181302e3aea553e796167186ecf6e\": container with ID starting with 2515798f1f42ba84dd9717173c838e5ff63181302e3aea553e796167186ecf6e not found: ID does not exist" containerID="2515798f1f42ba84dd9717173c838e5ff63181302e3aea553e796167186ecf6e" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.700361 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2515798f1f42ba84dd9717173c838e5ff63181302e3aea553e796167186ecf6e"} err="failed to get container status \"2515798f1f42ba84dd9717173c838e5ff63181302e3aea553e796167186ecf6e\": rpc error: code = 
NotFound desc = could not find container \"2515798f1f42ba84dd9717173c838e5ff63181302e3aea553e796167186ecf6e\": container with ID starting with 2515798f1f42ba84dd9717173c838e5ff63181302e3aea553e796167186ecf6e not found: ID does not exist" Oct 08 14:23:08 crc kubenswrapper[5065]: I1008 14:23:08.883946 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bf55d11-f6f4-4391-ae35-ae0a28dda351" path="/var/lib/kubelet/pods/8bf55d11-f6f4-4391-ae35-ae0a28dda351/volumes" Oct 08 14:24:09 crc kubenswrapper[5065]: I1008 14:24:09.990800 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h9g4q"] Oct 08 14:24:09 crc kubenswrapper[5065]: E1008 14:24:09.991644 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf55d11-f6f4-4391-ae35-ae0a28dda351" containerName="extract-utilities" Oct 08 14:24:09 crc kubenswrapper[5065]: I1008 14:24:09.991657 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf55d11-f6f4-4391-ae35-ae0a28dda351" containerName="extract-utilities" Oct 08 14:24:09 crc kubenswrapper[5065]: E1008 14:24:09.991680 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf55d11-f6f4-4391-ae35-ae0a28dda351" containerName="extract-content" Oct 08 14:24:09 crc kubenswrapper[5065]: I1008 14:24:09.991686 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf55d11-f6f4-4391-ae35-ae0a28dda351" containerName="extract-content" Oct 08 14:24:09 crc kubenswrapper[5065]: E1008 14:24:09.991705 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf55d11-f6f4-4391-ae35-ae0a28dda351" containerName="registry-server" Oct 08 14:24:09 crc kubenswrapper[5065]: I1008 14:24:09.991711 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf55d11-f6f4-4391-ae35-ae0a28dda351" containerName="registry-server" Oct 08 14:24:09 crc kubenswrapper[5065]: I1008 14:24:09.991841 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf55d11-f6f4-4391-ae35-ae0a28dda351" containerName="registry-server" Oct 08 14:24:09 crc kubenswrapper[5065]: I1008 14:24:09.992835 5065 util.go:30] "No sandbox for pod can be found. 
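
The RemoveStaleState burst above fires while admitting certified-operators-h9g4q: the CPU and memory managers still hold per-container assignments for the long-deleted 8bf55d11 pod and purge them before accounting for the new pod. A sketch of that pruning pass, assuming state keyed by (podUID, containerName):

    // stalestate.go - sketch of the RemoveStaleState pass (assumed shape).
    package main

    import "fmt"

    type key struct{ podUID, container string }

    func removeStaleState(assignments map[key]string, activePods map[string]bool) {
        for k := range assignments {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
                delete(assignments, k) // deleting while ranging is safe in Go
            }
        }
    }

    func main() {
        state := map[key]string{
            {"8bf55d11-f6f4-4391-ae35-ae0a28dda351", "registry-server"}: "cpuset 0-1",
        }
        removeStaleState(state, map[string]bool{"9728f396-7692-46ab-b713-a5a7eee1c511": true})
    }
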
Need to start a new one" pod="openshift-marketplace/certified-operators-h9g4q" Oct 08 14:24:10 crc kubenswrapper[5065]: I1008 14:24:10.001721 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h9g4q"] Oct 08 14:24:10 crc kubenswrapper[5065]: I1008 14:24:10.174016 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9728f396-7692-46ab-b713-a5a7eee1c511-catalog-content\") pod \"certified-operators-h9g4q\" (UID: \"9728f396-7692-46ab-b713-a5a7eee1c511\") " pod="openshift-marketplace/certified-operators-h9g4q" Oct 08 14:24:10 crc kubenswrapper[5065]: I1008 14:24:10.174391 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm8kh\" (UniqueName: \"kubernetes.io/projected/9728f396-7692-46ab-b713-a5a7eee1c511-kube-api-access-dm8kh\") pod \"certified-operators-h9g4q\" (UID: \"9728f396-7692-46ab-b713-a5a7eee1c511\") " pod="openshift-marketplace/certified-operators-h9g4q" Oct 08 14:24:10 crc kubenswrapper[5065]: I1008 14:24:10.174573 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9728f396-7692-46ab-b713-a5a7eee1c511-utilities\") pod \"certified-operators-h9g4q\" (UID: \"9728f396-7692-46ab-b713-a5a7eee1c511\") " pod="openshift-marketplace/certified-operators-h9g4q" Oct 08 14:24:10 crc kubenswrapper[5065]: I1008 14:24:10.276138 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9728f396-7692-46ab-b713-a5a7eee1c511-catalog-content\") pod \"certified-operators-h9g4q\" (UID: \"9728f396-7692-46ab-b713-a5a7eee1c511\") " pod="openshift-marketplace/certified-operators-h9g4q" Oct 08 14:24:10 crc kubenswrapper[5065]: I1008 14:24:10.276250 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm8kh\" (UniqueName: \"kubernetes.io/projected/9728f396-7692-46ab-b713-a5a7eee1c511-kube-api-access-dm8kh\") pod \"certified-operators-h9g4q\" (UID: \"9728f396-7692-46ab-b713-a5a7eee1c511\") " pod="openshift-marketplace/certified-operators-h9g4q" Oct 08 14:24:10 crc kubenswrapper[5065]: I1008 14:24:10.276282 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9728f396-7692-46ab-b713-a5a7eee1c511-utilities\") pod \"certified-operators-h9g4q\" (UID: \"9728f396-7692-46ab-b713-a5a7eee1c511\") " pod="openshift-marketplace/certified-operators-h9g4q" Oct 08 14:24:10 crc kubenswrapper[5065]: I1008 14:24:10.276728 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9728f396-7692-46ab-b713-a5a7eee1c511-catalog-content\") pod \"certified-operators-h9g4q\" (UID: \"9728f396-7692-46ab-b713-a5a7eee1c511\") " pod="openshift-marketplace/certified-operators-h9g4q" Oct 08 14:24:10 crc kubenswrapper[5065]: I1008 14:24:10.276767 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9728f396-7692-46ab-b713-a5a7eee1c511-utilities\") pod \"certified-operators-h9g4q\" (UID: \"9728f396-7692-46ab-b713-a5a7eee1c511\") " pod="openshift-marketplace/certified-operators-h9g4q" Oct 08 14:24:10 crc kubenswrapper[5065]: I1008 14:24:10.298205 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dm8kh\" (UniqueName: \"kubernetes.io/projected/9728f396-7692-46ab-b713-a5a7eee1c511-kube-api-access-dm8kh\") pod \"certified-operators-h9g4q\" (UID: \"9728f396-7692-46ab-b713-a5a7eee1c511\") " pod="openshift-marketplace/certified-operators-h9g4q" Oct 08 14:24:10 crc kubenswrapper[5065]: I1008 14:24:10.322026 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h9g4q" Oct 08 14:24:10 crc kubenswrapper[5065]: I1008 14:24:10.794159 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h9g4q"] Oct 08 14:24:10 crc kubenswrapper[5065]: W1008 14:24:10.806679 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9728f396_7692_46ab_b713_a5a7eee1c511.slice/crio-61cf53c725db201ad83232b421d4f7634d5689fc42049ee98e9c05911d273d85 WatchSource:0}: Error finding container 61cf53c725db201ad83232b421d4f7634d5689fc42049ee98e9c05911d273d85: Status 404 returned error can't find the container with id 61cf53c725db201ad83232b421d4f7634d5689fc42049ee98e9c05911d273d85 Oct 08 14:24:11 crc kubenswrapper[5065]: I1008 14:24:11.109942 5065 generic.go:334] "Generic (PLEG): container finished" podID="9728f396-7692-46ab-b713-a5a7eee1c511" containerID="639ae3070cdfb34c6da4979ca0cb46acf45b66a5d15ae7057f23a8668b2a966c" exitCode=0 Oct 08 14:24:11 crc kubenswrapper[5065]: I1008 14:24:11.109996 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g4q" event={"ID":"9728f396-7692-46ab-b713-a5a7eee1c511","Type":"ContainerDied","Data":"639ae3070cdfb34c6da4979ca0cb46acf45b66a5d15ae7057f23a8668b2a966c"} Oct 08 14:24:11 crc kubenswrapper[5065]: I1008 14:24:11.110024 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g4q" event={"ID":"9728f396-7692-46ab-b713-a5a7eee1c511","Type":"ContainerStarted","Data":"61cf53c725db201ad83232b421d4f7634d5689fc42049ee98e9c05911d273d85"} Oct 08 14:24:12 crc kubenswrapper[5065]: I1008 14:24:12.119582 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g4q" event={"ID":"9728f396-7692-46ab-b713-a5a7eee1c511","Type":"ContainerStarted","Data":"dead9cef996a3fb800ed964ea1a98e03749e5092a19d76c3bcc275f992ee5b5f"} Oct 08 14:24:13 crc kubenswrapper[5065]: I1008 14:24:13.138037 5065 generic.go:334] "Generic (PLEG): container finished" podID="9728f396-7692-46ab-b713-a5a7eee1c511" containerID="dead9cef996a3fb800ed964ea1a98e03749e5092a19d76c3bcc275f992ee5b5f" exitCode=0 Oct 08 14:24:13 crc kubenswrapper[5065]: I1008 14:24:13.138283 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g4q" event={"ID":"9728f396-7692-46ab-b713-a5a7eee1c511","Type":"ContainerDied","Data":"dead9cef996a3fb800ed964ea1a98e03749e5092a19d76c3bcc275f992ee5b5f"} Oct 08 14:24:14 crc kubenswrapper[5065]: I1008 14:24:14.148634 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g4q" event={"ID":"9728f396-7692-46ab-b713-a5a7eee1c511","Type":"ContainerStarted","Data":"8a39a259734ad938c35a7eb1428fb447dec9d87f7b90654de41d679bfdf670d3"} Oct 08 14:24:14 crc kubenswrapper[5065]: I1008 14:24:14.180249 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h9g4q" 
Oct 08 14:24:14 crc kubenswrapper[5065]: I1008 14:24:14.180249 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h9g4q" podStartSLOduration=2.7591390909999998 podStartE2EDuration="5.180223166s" podCreationTimestamp="2025-10-08 14:24:09 +0000 UTC" firstStartedPulling="2025-10-08 14:24:11.111524592 +0000 UTC m=+3952.888906349" lastFinishedPulling="2025-10-08 14:24:13.532608667 +0000 UTC m=+3955.309990424" observedRunningTime="2025-10-08 14:24:14.175860897 +0000 UTC m=+3955.953242664" watchObservedRunningTime="2025-10-08 14:24:14.180223166 +0000 UTC m=+3955.957604923"
Oct 08 14:24:20 crc kubenswrapper[5065]: I1008 14:24:20.322745 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h9g4q"
Oct 08 14:24:20 crc kubenswrapper[5065]: I1008 14:24:20.323396 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h9g4q"
Oct 08 14:24:20 crc kubenswrapper[5065]: I1008 14:24:20.370335 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h9g4q"
Oct 08 14:24:21 crc kubenswrapper[5065]: I1008 14:24:21.267474 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h9g4q"
Oct 08 14:24:21 crc kubenswrapper[5065]: I1008 14:24:21.315500 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h9g4q"]
Oct 08 14:24:23 crc kubenswrapper[5065]: I1008 14:24:23.237264 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h9g4q" podUID="9728f396-7692-46ab-b713-a5a7eee1c511" containerName="registry-server" containerID="cri-o://8a39a259734ad938c35a7eb1428fb447dec9d87f7b90654de41d679bfdf670d3" gracePeriod=2
Oct 08 14:24:24 crc kubenswrapper[5065]: I1008 14:24:24.248289 5065 generic.go:334] "Generic (PLEG): container finished" podID="9728f396-7692-46ab-b713-a5a7eee1c511" containerID="8a39a259734ad938c35a7eb1428fb447dec9d87f7b90654de41d679bfdf670d3" exitCode=0
Oct 08 14:24:24 crc kubenswrapper[5065]: I1008 14:24:24.248390 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g4q" event={"ID":"9728f396-7692-46ab-b713-a5a7eee1c511","Type":"ContainerDied","Data":"8a39a259734ad938c35a7eb1428fb447dec9d87f7b90654de41d679bfdf670d3"}
Oct 08 14:24:24 crc kubenswrapper[5065]: I1008 14:24:24.940076 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h9g4q"
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.114728 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9728f396-7692-46ab-b713-a5a7eee1c511-catalog-content\") pod \"9728f396-7692-46ab-b713-a5a7eee1c511\" (UID: \"9728f396-7692-46ab-b713-a5a7eee1c511\") "
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.114818 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9728f396-7692-46ab-b713-a5a7eee1c511-utilities\") pod \"9728f396-7692-46ab-b713-a5a7eee1c511\" (UID: \"9728f396-7692-46ab-b713-a5a7eee1c511\") "
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.114893 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm8kh\" (UniqueName: \"kubernetes.io/projected/9728f396-7692-46ab-b713-a5a7eee1c511-kube-api-access-dm8kh\") pod \"9728f396-7692-46ab-b713-a5a7eee1c511\" (UID: \"9728f396-7692-46ab-b713-a5a7eee1c511\") "
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.115868 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9728f396-7692-46ab-b713-a5a7eee1c511-utilities" (OuterVolumeSpecName: "utilities") pod "9728f396-7692-46ab-b713-a5a7eee1c511" (UID: "9728f396-7692-46ab-b713-a5a7eee1c511"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.119591 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9728f396-7692-46ab-b713-a5a7eee1c511-kube-api-access-dm8kh" (OuterVolumeSpecName: "kube-api-access-dm8kh") pod "9728f396-7692-46ab-b713-a5a7eee1c511" (UID: "9728f396-7692-46ab-b713-a5a7eee1c511"). InnerVolumeSpecName "kube-api-access-dm8kh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.179947 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9728f396-7692-46ab-b713-a5a7eee1c511-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9728f396-7692-46ab-b713-a5a7eee1c511" (UID: "9728f396-7692-46ab-b713-a5a7eee1c511"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.219888 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm8kh\" (UniqueName: \"kubernetes.io/projected/9728f396-7692-46ab-b713-a5a7eee1c511-kube-api-access-dm8kh\") on node \"crc\" DevicePath \"\""
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.219926 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9728f396-7692-46ab-b713-a5a7eee1c511-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.219937 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9728f396-7692-46ab-b713-a5a7eee1c511-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.257269 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g4q" event={"ID":"9728f396-7692-46ab-b713-a5a7eee1c511","Type":"ContainerDied","Data":"61cf53c725db201ad83232b421d4f7634d5689fc42049ee98e9c05911d273d85"}
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.257337 5065 scope.go:117] "RemoveContainer" containerID="8a39a259734ad938c35a7eb1428fb447dec9d87f7b90654de41d679bfdf670d3"
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.257500 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h9g4q"
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.305317 5065 scope.go:117] "RemoveContainer" containerID="dead9cef996a3fb800ed964ea1a98e03749e5092a19d76c3bcc275f992ee5b5f"
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.306649 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h9g4q"]
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.320637 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h9g4q"]
Oct 08 14:24:25 crc kubenswrapper[5065]: I1008 14:24:25.337214 5065 scope.go:117] "RemoveContainer" containerID="639ae3070cdfb34c6da4979ca0cb46acf45b66a5d15ae7057f23a8668b2a966c"
Oct 08 14:24:26 crc kubenswrapper[5065]: I1008 14:24:26.886481 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9728f396-7692-46ab-b713-a5a7eee1c511" path="/var/lib/kubelet/pods/9728f396-7692-46ab-b713-a5a7eee1c511/volumes"
Oct 08 14:25:24 crc kubenswrapper[5065]: I1008 14:25:24.375965 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 14:25:24 crc kubenswrapper[5065]: I1008 14:25:24.376661 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 14:25:54 crc kubenswrapper[5065]: I1008 14:25:54.374956 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 14:25:54 crc kubenswrapper[5065]: I1008 14:25:54.375391 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 14:26:24 crc kubenswrapper[5065]: I1008 14:26:24.375809 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 14:26:24 crc kubenswrapper[5065]: I1008 14:26:24.376494 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 14:26:24 crc kubenswrapper[5065]: I1008 14:26:24.376556 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj"
Oct 08 14:26:24 crc kubenswrapper[5065]: I1008 14:26:24.377705 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84d339a745bf200d368f262cdee08695e715cb47078b41ab3e6bc4f3de6ee03f"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 14:26:24 crc kubenswrapper[5065]: I1008 14:26:24.377789 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://84d339a745bf200d368f262cdee08695e715cb47078b41ab3e6bc4f3de6ee03f" gracePeriod=600
Oct 08 14:26:25 crc kubenswrapper[5065]: I1008 14:26:25.181195 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="84d339a745bf200d368f262cdee08695e715cb47078b41ab3e6bc4f3de6ee03f" exitCode=0
Oct 08 14:26:25 crc kubenswrapper[5065]: I1008 14:26:25.181281 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"84d339a745bf200d368f262cdee08695e715cb47078b41ab3e6bc4f3de6ee03f"}
Oct 08 14:26:25 crc kubenswrapper[5065]: I1008 14:26:25.181592 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268"}
Oct 08 14:26:25 crc kubenswrapper[5065]: I1008 14:26:25.181618 5065 scope.go:117] "RemoveContainer" containerID="2819b0c6b2b7b55e58e93125c3aa47a2f34dfcbc258d912b8c16fbcd9ff8481d"
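
The 14:25:24 through 14:26:25 sequence above is a complete liveness restart cycle: three consecutive "connection refused" failures against http://127.0.0.1:8798/health, the probe flips to "unhealthy", and the kubelet kills machine-config-daemon with gracePeriod=600, then starts replacement container 7da743cc... A minimal sketch of the HTTP check itself, assuming the kubelet convention that only 2xx/3xx responses count as success:

    // liveness.go - sketch of the HTTP liveness check failing above.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
            fmt.Println("Probe failed:", err)
            return
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            fmt.Println("probe ok:", resp.Status)
        } else {
            fmt.Println("Probe failed: status", resp.Status)
        }
    }
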
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:28:24 crc kubenswrapper[5065]: I1008 14:28:24.376703 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:28:54 crc kubenswrapper[5065]: I1008 14:28:54.375037 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:28:54 crc kubenswrapper[5065]: I1008 14:28:54.375742 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:29:24 crc kubenswrapper[5065]: I1008 14:29:24.375280 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:29:24 crc kubenswrapper[5065]: I1008 14:29:24.377041 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:29:24 crc kubenswrapper[5065]: I1008 14:29:24.377157 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 14:29:24 crc kubenswrapper[5065]: I1008 14:29:24.377888 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:29:24 crc kubenswrapper[5065]: I1008 14:29:24.378037 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" gracePeriod=600 Oct 08 14:29:24 crc kubenswrapper[5065]: E1008 14:29:24.509209 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" 
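The probe failures above arrive every 30 seconds, and the kubelet kills the container after the third consecutive failure, which is consistent with a liveness probe of periodSeconds 30 and failureThreshold 3 against http://127.0.0.1:8798/health; the DaemonSet spec itself is not part of this log, so those values are inferred rather than confirmed. The "back-off 5m0s" in the pod_workers error is the kubelet's CrashLoopBackOff delay, which by documented behavior starts at 10s and doubles per restart up to a 5-minute cap. A minimal sketch of that schedule:

```python
# Sketch of kubelet's CrashLoopBackOff schedule (documented behavior:
# 10s initial delay, doubling per restart, capped at 5m). The probe
# period/threshold above are inferred from the 30s failure cadence,
# not read from the machine-config-daemon spec.
BASE_S, CAP_S = 10, 300

def backoff_delays(restarts: int) -> list[int]:
    """Seconds the kubelet waits before each successive restart."""
    return [min(BASE_S * 2 ** i, CAP_S) for i in range(restarts)]

print(backoff_delays(7))  # [10, 20, 40, 80, 160, 300, 300] -> "back-off 5m0s"
```

Once the cap is reached, each sync attempt inside the backoff window is skipped with the same "back-off 5m0s" error, which is why that message repeats below without an actual restart in between.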
pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:29:24 crc kubenswrapper[5065]: I1008 14:29:24.789686 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" exitCode=0 Oct 08 14:29:24 crc kubenswrapper[5065]: I1008 14:29:24.789768 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268"} Oct 08 14:29:24 crc kubenswrapper[5065]: I1008 14:29:24.789846 5065 scope.go:117] "RemoveContainer" containerID="84d339a745bf200d368f262cdee08695e715cb47078b41ab3e6bc4f3de6ee03f" Oct 08 14:29:24 crc kubenswrapper[5065]: I1008 14:29:24.790462 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:29:24 crc kubenswrapper[5065]: E1008 14:29:24.790783 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:29:27 crc kubenswrapper[5065]: I1008 14:29:27.779994 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2h7cj"] Oct 08 14:29:27 crc kubenswrapper[5065]: E1008 14:29:27.782102 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9728f396-7692-46ab-b713-a5a7eee1c511" containerName="extract-content" Oct 08 14:29:27 crc kubenswrapper[5065]: I1008 14:29:27.782153 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9728f396-7692-46ab-b713-a5a7eee1c511" containerName="extract-content" Oct 08 14:29:27 crc kubenswrapper[5065]: E1008 14:29:27.782216 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9728f396-7692-46ab-b713-a5a7eee1c511" containerName="extract-utilities" Oct 08 14:29:27 crc kubenswrapper[5065]: I1008 14:29:27.782230 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9728f396-7692-46ab-b713-a5a7eee1c511" containerName="extract-utilities" Oct 08 14:29:27 crc kubenswrapper[5065]: E1008 14:29:27.782253 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9728f396-7692-46ab-b713-a5a7eee1c511" containerName="registry-server" Oct 08 14:29:27 crc kubenswrapper[5065]: I1008 14:29:27.782265 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9728f396-7692-46ab-b713-a5a7eee1c511" containerName="registry-server" Oct 08 14:29:27 crc kubenswrapper[5065]: I1008 14:29:27.782559 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="9728f396-7692-46ab-b713-a5a7eee1c511" containerName="registry-server" Oct 08 14:29:27 crc kubenswrapper[5065]: I1008 14:29:27.784478 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2h7cj" Oct 08 14:29:27 crc kubenswrapper[5065]: I1008 14:29:27.796757 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2h7cj"] Oct 08 14:29:27 crc kubenswrapper[5065]: I1008 14:29:27.919290 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-catalog-content\") pod \"redhat-marketplace-2h7cj\" (UID: \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\") " pod="openshift-marketplace/redhat-marketplace-2h7cj" Oct 08 14:29:27 crc kubenswrapper[5065]: I1008 14:29:27.919597 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v5gg\" (UniqueName: \"kubernetes.io/projected/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-kube-api-access-8v5gg\") pod \"redhat-marketplace-2h7cj\" (UID: \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\") " pod="openshift-marketplace/redhat-marketplace-2h7cj" Oct 08 14:29:27 crc kubenswrapper[5065]: I1008 14:29:27.919730 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-utilities\") pod \"redhat-marketplace-2h7cj\" (UID: \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\") " pod="openshift-marketplace/redhat-marketplace-2h7cj" Oct 08 14:29:28 crc kubenswrapper[5065]: I1008 14:29:28.021032 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v5gg\" (UniqueName: \"kubernetes.io/projected/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-kube-api-access-8v5gg\") pod \"redhat-marketplace-2h7cj\" (UID: \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\") " pod="openshift-marketplace/redhat-marketplace-2h7cj" Oct 08 14:29:28 crc kubenswrapper[5065]: I1008 14:29:28.021206 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-utilities\") pod \"redhat-marketplace-2h7cj\" (UID: \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\") " pod="openshift-marketplace/redhat-marketplace-2h7cj" Oct 08 14:29:28 crc kubenswrapper[5065]: I1008 14:29:28.021289 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-catalog-content\") pod \"redhat-marketplace-2h7cj\" (UID: \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\") " pod="openshift-marketplace/redhat-marketplace-2h7cj" Oct 08 14:29:28 crc kubenswrapper[5065]: I1008 14:29:28.022178 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-catalog-content\") pod \"redhat-marketplace-2h7cj\" (UID: \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\") " pod="openshift-marketplace/redhat-marketplace-2h7cj" Oct 08 14:29:28 crc kubenswrapper[5065]: I1008 14:29:28.022281 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-utilities\") pod \"redhat-marketplace-2h7cj\" (UID: \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\") " pod="openshift-marketplace/redhat-marketplace-2h7cj" Oct 08 14:29:28 crc kubenswrapper[5065]: I1008 14:29:28.056719 5065 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8v5gg\" (UniqueName: \"kubernetes.io/projected/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-kube-api-access-8v5gg\") pod \"redhat-marketplace-2h7cj\" (UID: \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\") " pod="openshift-marketplace/redhat-marketplace-2h7cj" Oct 08 14:29:28 crc kubenswrapper[5065]: I1008 14:29:28.132762 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2h7cj" Oct 08 14:29:28 crc kubenswrapper[5065]: I1008 14:29:28.566244 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2h7cj"] Oct 08 14:29:28 crc kubenswrapper[5065]: W1008 14:29:28.579766 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f0bd48b_9c6c_4352_a83b_ac2e30e8adc3.slice/crio-a0c6033681a08f7951c490561cef94935292170209c652bcac53d6587a9fe3d5 WatchSource:0}: Error finding container a0c6033681a08f7951c490561cef94935292170209c652bcac53d6587a9fe3d5: Status 404 returned error can't find the container with id a0c6033681a08f7951c490561cef94935292170209c652bcac53d6587a9fe3d5 Oct 08 14:29:28 crc kubenswrapper[5065]: I1008 14:29:28.846058 5065 generic.go:334] "Generic (PLEG): container finished" podID="3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" containerID="19cf7c4f8e4bc32354344a7b568b897c3a327c80d3ef24d25441483f33fa0878" exitCode=0 Oct 08 14:29:28 crc kubenswrapper[5065]: I1008 14:29:28.846125 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2h7cj" event={"ID":"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3","Type":"ContainerDied","Data":"19cf7c4f8e4bc32354344a7b568b897c3a327c80d3ef24d25441483f33fa0878"} Oct 08 14:29:28 crc kubenswrapper[5065]: I1008 14:29:28.846196 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2h7cj" event={"ID":"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3","Type":"ContainerStarted","Data":"a0c6033681a08f7951c490561cef94935292170209c652bcac53d6587a9fe3d5"} Oct 08 14:29:28 crc kubenswrapper[5065]: I1008 14:29:28.849044 5065 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 14:29:29 crc kubenswrapper[5065]: I1008 14:29:29.854089 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2h7cj" event={"ID":"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3","Type":"ContainerStarted","Data":"dcd9333b8ab73d383dcd320c867e196f0bebc612f158f24ab5ff05ce9880987f"} Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.572743 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t4xjh"] Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.574682 5065 util.go:30] "No sandbox for pod can be found. 
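Each volume in the records above walks the same reconciler lifecycle: VerifyControllerAttachedVolume started, then MountVolume started, then MountVolume.SetUp succeeded; teardown later reverses it (UnmountVolume started, UnmountVolume.TearDown succeeded, Volume detached). A rough way to audit that every volume completes the cycle in a dump like this is to tally the phase strings per volume; a sketch, with the regex tuned to the escaped quoting kubenswrapper uses in this journal:

```python
import re
from collections import defaultdict

# Phase strings exactly as kubenswrapper logs them at each reconciler step.
PHASES = (
    "VerifyControllerAttachedVolume started",
    "MountVolume.SetUp succeeded",
    "UnmountVolume.TearDown succeeded",
    "Volume detached",
)
# Matches volume \"name\" (escaped) or volume "name" (unescaped) forms.
VOLUME = re.compile(r'volume \\?"([^"\\]+)\\?"')

def volume_lifecycle(journal_lines):
    """Map volume name -> ordered list of reconciler phases seen for it."""
    seen = defaultdict(list)
    for line in journal_lines:
        for phase in PHASES:
            if phase in line and (m := VOLUME.search(line)):
                seen[m.group(1)].append(phase)
                break
    return dict(seen)
```

A volume whose list ends at "MountVolume.SetUp succeeded" with no later teardown phases is still mounted; for the marketplace pods here, all three volumes (catalog-content, utilities, the projected kube-api-access token) run the full cycle.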
Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.582850 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4xjh"]
Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.664137 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq2nm\" (UniqueName: \"kubernetes.io/projected/538269e2-0aea-40b5-83ef-c419676b0203-kube-api-access-hq2nm\") pod \"redhat-operators-t4xjh\" (UID: \"538269e2-0aea-40b5-83ef-c419676b0203\") " pod="openshift-marketplace/redhat-operators-t4xjh"
Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.664186 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/538269e2-0aea-40b5-83ef-c419676b0203-catalog-content\") pod \"redhat-operators-t4xjh\" (UID: \"538269e2-0aea-40b5-83ef-c419676b0203\") " pod="openshift-marketplace/redhat-operators-t4xjh"
Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.664236 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/538269e2-0aea-40b5-83ef-c419676b0203-utilities\") pod \"redhat-operators-t4xjh\" (UID: \"538269e2-0aea-40b5-83ef-c419676b0203\") " pod="openshift-marketplace/redhat-operators-t4xjh"
Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.765851 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq2nm\" (UniqueName: \"kubernetes.io/projected/538269e2-0aea-40b5-83ef-c419676b0203-kube-api-access-hq2nm\") pod \"redhat-operators-t4xjh\" (UID: \"538269e2-0aea-40b5-83ef-c419676b0203\") " pod="openshift-marketplace/redhat-operators-t4xjh"
Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.765896 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/538269e2-0aea-40b5-83ef-c419676b0203-catalog-content\") pod \"redhat-operators-t4xjh\" (UID: \"538269e2-0aea-40b5-83ef-c419676b0203\") " pod="openshift-marketplace/redhat-operators-t4xjh"
Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.765937 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/538269e2-0aea-40b5-83ef-c419676b0203-utilities\") pod \"redhat-operators-t4xjh\" (UID: \"538269e2-0aea-40b5-83ef-c419676b0203\") " pod="openshift-marketplace/redhat-operators-t4xjh"
Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.766504 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/538269e2-0aea-40b5-83ef-c419676b0203-utilities\") pod \"redhat-operators-t4xjh\" (UID: \"538269e2-0aea-40b5-83ef-c419676b0203\") " pod="openshift-marketplace/redhat-operators-t4xjh"
Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.766530 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/538269e2-0aea-40b5-83ef-c419676b0203-catalog-content\") pod \"redhat-operators-t4xjh\" (UID: \"538269e2-0aea-40b5-83ef-c419676b0203\") " pod="openshift-marketplace/redhat-operators-t4xjh"
Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.785251 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq2nm\" (UniqueName: \"kubernetes.io/projected/538269e2-0aea-40b5-83ef-c419676b0203-kube-api-access-hq2nm\") pod \"redhat-operators-t4xjh\" (UID: \"538269e2-0aea-40b5-83ef-c419676b0203\") " pod="openshift-marketplace/redhat-operators-t4xjh"
Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.861302 5065 generic.go:334] "Generic (PLEG): container finished" podID="3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" containerID="dcd9333b8ab73d383dcd320c867e196f0bebc612f158f24ab5ff05ce9880987f" exitCode=0
Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.861356 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2h7cj" event={"ID":"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3","Type":"ContainerDied","Data":"dcd9333b8ab73d383dcd320c867e196f0bebc612f158f24ab5ff05ce9880987f"}
Oct 08 14:29:30 crc kubenswrapper[5065]: I1008 14:29:30.894139 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4xjh"
Oct 08 14:29:31 crc kubenswrapper[5065]: I1008 14:29:31.289460 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4xjh"]
Oct 08 14:29:31 crc kubenswrapper[5065]: W1008 14:29:31.290333 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod538269e2_0aea_40b5_83ef_c419676b0203.slice/crio-82f84e96d21d3b8cfafde135b5568aef2808787baf631e0461e728af41ae1b5e WatchSource:0}: Error finding container 82f84e96d21d3b8cfafde135b5568aef2808787baf631e0461e728af41ae1b5e: Status 404 returned error can't find the container with id 82f84e96d21d3b8cfafde135b5568aef2808787baf631e0461e728af41ae1b5e
Oct 08 14:29:31 crc kubenswrapper[5065]: I1008 14:29:31.875602 5065 generic.go:334] "Generic (PLEG): container finished" podID="538269e2-0aea-40b5-83ef-c419676b0203" containerID="df790a520039fc47a2750996a6e22b3032384622cef98407a60e00f4493e6add" exitCode=0
Oct 08 14:29:31 crc kubenswrapper[5065]: I1008 14:29:31.875718 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4xjh" event={"ID":"538269e2-0aea-40b5-83ef-c419676b0203","Type":"ContainerDied","Data":"df790a520039fc47a2750996a6e22b3032384622cef98407a60e00f4493e6add"}
Oct 08 14:29:31 crc kubenswrapper[5065]: I1008 14:29:31.875906 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4xjh" event={"ID":"538269e2-0aea-40b5-83ef-c419676b0203","Type":"ContainerStarted","Data":"82f84e96d21d3b8cfafde135b5568aef2808787baf631e0461e728af41ae1b5e"}
Oct 08 14:29:31 crc kubenswrapper[5065]: I1008 14:29:31.879302 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2h7cj" event={"ID":"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3","Type":"ContainerStarted","Data":"7c939de178c8a475479e68d55e9c4dd4185b9408b7021e679456e79b0b7d156c"}
Oct 08 14:29:33 crc kubenswrapper[5065]: I1008 14:29:33.899437 5065 generic.go:334] "Generic (PLEG): container finished" podID="538269e2-0aea-40b5-83ef-c419676b0203" containerID="3cc49e2224fdd10ad690411a8eb884a47b398c5ca9f143842016dbd049f08b2a" exitCode=0
Oct 08 14:29:33 crc kubenswrapper[5065]: I1008 14:29:33.899484 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4xjh" event={"ID":"538269e2-0aea-40b5-83ef-c419676b0203","Type":"ContainerDied","Data":"3cc49e2224fdd10ad690411a8eb884a47b398c5ca9f143842016dbd049f08b2a"}
Oct 08 14:29:33 crc kubenswrapper[5065]: I1008 14:29:33.931700 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2h7cj" podStartSLOduration=4.241584907 podStartE2EDuration="6.931682038s" podCreationTimestamp="2025-10-08 14:29:27 +0000 UTC" firstStartedPulling="2025-10-08 14:29:28.848739623 +0000 UTC m=+4270.626121380" lastFinishedPulling="2025-10-08 14:29:31.538836754 +0000 UTC m=+4273.316218511" observedRunningTime="2025-10-08 14:29:31.925466077 +0000 UTC m=+4273.702847834" watchObservedRunningTime="2025-10-08 14:29:33.931682038 +0000 UTC m=+4275.709063805"
Oct 08 14:29:34 crc kubenswrapper[5065]: I1008 14:29:34.912099 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4xjh" event={"ID":"538269e2-0aea-40b5-83ef-c419676b0203","Type":"ContainerStarted","Data":"7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7"}
Oct 08 14:29:34 crc kubenswrapper[5065]: I1008 14:29:34.942696 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t4xjh" podStartSLOduration=2.460681123 podStartE2EDuration="4.942672318s" podCreationTimestamp="2025-10-08 14:29:30 +0000 UTC" firstStartedPulling="2025-10-08 14:29:31.87678101 +0000 UTC m=+4273.654162767" lastFinishedPulling="2025-10-08 14:29:34.358772185 +0000 UTC m=+4276.136153962" observedRunningTime="2025-10-08 14:29:34.935676647 +0000 UTC m=+4276.713058474" watchObservedRunningTime="2025-10-08 14:29:34.942672318 +0000 UTC m=+4276.720054075"
Oct 08 14:29:36 crc kubenswrapper[5065]: I1008 14:29:36.876098 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268"
Oct 08 14:29:36 crc kubenswrapper[5065]: E1008 14:29:36.876766 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 14:29:38 crc kubenswrapper[5065]: I1008 14:29:38.133662 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2h7cj"
Oct 08 14:29:38 crc kubenswrapper[5065]: I1008 14:29:38.133858 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2h7cj"
Oct 08 14:29:38 crc kubenswrapper[5065]: I1008 14:29:38.185034 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2h7cj"
Oct 08 14:29:38 crc kubenswrapper[5065]: I1008 14:29:38.988110 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2h7cj"
Oct 08 14:29:39 crc kubenswrapper[5065]: I1008 14:29:39.034062 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2h7cj"]
Oct 08 14:29:40 crc kubenswrapper[5065]: I1008 14:29:40.895202 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t4xjh"
Oct 08 14:29:40 crc kubenswrapper[5065]: I1008 14:29:40.895670 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t4xjh"
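The pod_startup_latency_tracker record for redhat-marketplace-2h7cj is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Recomputing the logged numbers from the timestamps in the record:

```python
from datetime import datetime, timezone

# Recompute the redhat-marketplace-2h7cj startup record from the timestamps
# logged above (truncated to microseconds for strptime's %f).
def ts(s: str) -> datetime:
    return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created   = datetime(2025, 10, 8, 14, 29, 27, tzinfo=timezone.utc)  # podCreationTimestamp
pull_from = ts("2025-10-08 14:29:28.848739623")  # firstStartedPulling
pull_to   = ts("2025-10-08 14:29:31.538836754")  # lastFinishedPulling
observed  = ts("2025-10-08 14:29:33.931682038")  # watchObservedRunningTime

e2e = (observed - created).total_seconds()         # 6.931682 -> podStartE2EDuration
slo = e2e - (pull_to - pull_from).total_seconds()  # 4.241585 -> podStartSLOduration
print(f"{e2e:.6f} {slo:.6f}")
```

The same arithmetic reproduces the redhat-operators-t4xjh record that follows (4.942672s end-to-end, 2.460681s after subtracting its 2.48s pull).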
Oct 08 14:29:40 crc kubenswrapper[5065]: I1008 14:29:40.949245 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t4xjh"
Oct 08 14:29:40 crc kubenswrapper[5065]: I1008 14:29:40.954938 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2h7cj" podUID="3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" containerName="registry-server" containerID="cri-o://7c939de178c8a475479e68d55e9c4dd4185b9408b7021e679456e79b0b7d156c" gracePeriod=2
Oct 08 14:29:40 crc kubenswrapper[5065]: I1008 14:29:40.996177 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t4xjh"
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.326286 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2h7cj"
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.423614 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-catalog-content\") pod \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\" (UID: \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\") "
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.423723 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-utilities\") pod \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\" (UID: \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\") "
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.423790 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v5gg\" (UniqueName: \"kubernetes.io/projected/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-kube-api-access-8v5gg\") pod \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\" (UID: \"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3\") "
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.424864 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-utilities" (OuterVolumeSpecName: "utilities") pod "3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" (UID: "3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.425502 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.434589 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-kube-api-access-8v5gg" (OuterVolumeSpecName: "kube-api-access-8v5gg") pod "3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" (UID: "3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3"). InnerVolumeSpecName "kube-api-access-8v5gg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.448268 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" (UID: "3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.527280 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.527326 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v5gg\" (UniqueName: \"kubernetes.io/projected/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3-kube-api-access-8v5gg\") on node \"crc\" DevicePath \"\""
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.966407 5065 generic.go:334] "Generic (PLEG): container finished" podID="3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" containerID="7c939de178c8a475479e68d55e9c4dd4185b9408b7021e679456e79b0b7d156c" exitCode=0
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.966563 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2h7cj" event={"ID":"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3","Type":"ContainerDied","Data":"7c939de178c8a475479e68d55e9c4dd4185b9408b7021e679456e79b0b7d156c"}
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.966634 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2h7cj"
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.966663 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2h7cj" event={"ID":"3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3","Type":"ContainerDied","Data":"a0c6033681a08f7951c490561cef94935292170209c652bcac53d6587a9fe3d5"}
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.966703 5065 scope.go:117] "RemoveContainer" containerID="7c939de178c8a475479e68d55e9c4dd4185b9408b7021e679456e79b0b7d156c"
Oct 08 14:29:41 crc kubenswrapper[5065]: I1008 14:29:41.997092 5065 scope.go:117] "RemoveContainer" containerID="dcd9333b8ab73d383dcd320c867e196f0bebc612f158f24ab5ff05ce9880987f"
Oct 08 14:29:42 crc kubenswrapper[5065]: I1008 14:29:42.028770 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2h7cj"]
Oct 08 14:29:42 crc kubenswrapper[5065]: I1008 14:29:42.035328 5065 scope.go:117] "RemoveContainer" containerID="19cf7c4f8e4bc32354344a7b568b897c3a327c80d3ef24d25441483f33fa0878"
Oct 08 14:29:42 crc kubenswrapper[5065]: I1008 14:29:42.038955 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2h7cj"]
Oct 08 14:29:42 crc kubenswrapper[5065]: I1008 14:29:42.074754 5065 scope.go:117] "RemoveContainer" containerID="7c939de178c8a475479e68d55e9c4dd4185b9408b7021e679456e79b0b7d156c"
Oct 08 14:29:42 crc kubenswrapper[5065]: E1008 14:29:42.075376 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c939de178c8a475479e68d55e9c4dd4185b9408b7021e679456e79b0b7d156c\": container with ID starting with 7c939de178c8a475479e68d55e9c4dd4185b9408b7021e679456e79b0b7d156c not found: ID does not exist" containerID="7c939de178c8a475479e68d55e9c4dd4185b9408b7021e679456e79b0b7d156c"
Oct 08 14:29:42 crc kubenswrapper[5065]: I1008 14:29:42.075434 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c939de178c8a475479e68d55e9c4dd4185b9408b7021e679456e79b0b7d156c"} err="failed to get container status \"7c939de178c8a475479e68d55e9c4dd4185b9408b7021e679456e79b0b7d156c\": rpc error: code = NotFound desc = could not find container \"7c939de178c8a475479e68d55e9c4dd4185b9408b7021e679456e79b0b7d156c\": container with ID starting with 7c939de178c8a475479e68d55e9c4dd4185b9408b7021e679456e79b0b7d156c not found: ID does not exist"
Oct 08 14:29:42 crc kubenswrapper[5065]: I1008 14:29:42.075482 5065 scope.go:117] "RemoveContainer" containerID="dcd9333b8ab73d383dcd320c867e196f0bebc612f158f24ab5ff05ce9880987f"
Oct 08 14:29:42 crc kubenswrapper[5065]: E1008 14:29:42.075857 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcd9333b8ab73d383dcd320c867e196f0bebc612f158f24ab5ff05ce9880987f\": container with ID starting with dcd9333b8ab73d383dcd320c867e196f0bebc612f158f24ab5ff05ce9880987f not found: ID does not exist" containerID="dcd9333b8ab73d383dcd320c867e196f0bebc612f158f24ab5ff05ce9880987f"
Oct 08 14:29:42 crc kubenswrapper[5065]: I1008 14:29:42.075927 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcd9333b8ab73d383dcd320c867e196f0bebc612f158f24ab5ff05ce9880987f"} err="failed to get container status \"dcd9333b8ab73d383dcd320c867e196f0bebc612f158f24ab5ff05ce9880987f\": rpc error: code = NotFound desc = could not find container \"dcd9333b8ab73d383dcd320c867e196f0bebc612f158f24ab5ff05ce9880987f\": container with ID starting with dcd9333b8ab73d383dcd320c867e196f0bebc612f158f24ab5ff05ce9880987f not found: ID does not exist"
Oct 08 14:29:42 crc kubenswrapper[5065]: I1008 14:29:42.075972 5065 scope.go:117] "RemoveContainer" containerID="19cf7c4f8e4bc32354344a7b568b897c3a327c80d3ef24d25441483f33fa0878"
Oct 08 14:29:42 crc kubenswrapper[5065]: E1008 14:29:42.076456 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19cf7c4f8e4bc32354344a7b568b897c3a327c80d3ef24d25441483f33fa0878\": container with ID starting with 19cf7c4f8e4bc32354344a7b568b897c3a327c80d3ef24d25441483f33fa0878 not found: ID does not exist" containerID="19cf7c4f8e4bc32354344a7b568b897c3a327c80d3ef24d25441483f33fa0878"
Oct 08 14:29:42 crc kubenswrapper[5065]: I1008 14:29:42.076492 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19cf7c4f8e4bc32354344a7b568b897c3a327c80d3ef24d25441483f33fa0878"} err="failed to get container status \"19cf7c4f8e4bc32354344a7b568b897c3a327c80d3ef24d25441483f33fa0878\": rpc error: code = NotFound desc = could not find container \"19cf7c4f8e4bc32354344a7b568b897c3a327c80d3ef24d25441483f33fa0878\": container with ID starting with 19cf7c4f8e4bc32354344a7b568b897c3a327c80d3ef24d25441483f33fa0878 not found: ID does not exist"
Oct 08 14:29:42 crc kubenswrapper[5065]: I1008 14:29:42.566859 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4xjh"]
Oct 08 14:29:42 crc kubenswrapper[5065]: I1008 14:29:42.886933 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" path="/var/lib/kubelet/pods/3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3/volumes"
Oct 08 14:29:42 crc kubenswrapper[5065]: I1008 14:29:42.978066 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t4xjh" podUID="538269e2-0aea-40b5-83ef-c419676b0203" containerName="registry-server" containerID="cri-o://7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7" gracePeriod=2
containerID="cri-o://7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7" gracePeriod=2 Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.396929 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4xjh" Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.559804 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/538269e2-0aea-40b5-83ef-c419676b0203-utilities\") pod \"538269e2-0aea-40b5-83ef-c419676b0203\" (UID: \"538269e2-0aea-40b5-83ef-c419676b0203\") " Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.559958 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq2nm\" (UniqueName: \"kubernetes.io/projected/538269e2-0aea-40b5-83ef-c419676b0203-kube-api-access-hq2nm\") pod \"538269e2-0aea-40b5-83ef-c419676b0203\" (UID: \"538269e2-0aea-40b5-83ef-c419676b0203\") " Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.560055 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/538269e2-0aea-40b5-83ef-c419676b0203-catalog-content\") pod \"538269e2-0aea-40b5-83ef-c419676b0203\" (UID: \"538269e2-0aea-40b5-83ef-c419676b0203\") " Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.562273 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/538269e2-0aea-40b5-83ef-c419676b0203-utilities" (OuterVolumeSpecName: "utilities") pod "538269e2-0aea-40b5-83ef-c419676b0203" (UID: "538269e2-0aea-40b5-83ef-c419676b0203"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.565553 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/538269e2-0aea-40b5-83ef-c419676b0203-kube-api-access-hq2nm" (OuterVolumeSpecName: "kube-api-access-hq2nm") pod "538269e2-0aea-40b5-83ef-c419676b0203" (UID: "538269e2-0aea-40b5-83ef-c419676b0203"). InnerVolumeSpecName "kube-api-access-hq2nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.647695 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/538269e2-0aea-40b5-83ef-c419676b0203-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "538269e2-0aea-40b5-83ef-c419676b0203" (UID: "538269e2-0aea-40b5-83ef-c419676b0203"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.661850 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/538269e2-0aea-40b5-83ef-c419676b0203-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.661881 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/538269e2-0aea-40b5-83ef-c419676b0203-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.661894 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq2nm\" (UniqueName: \"kubernetes.io/projected/538269e2-0aea-40b5-83ef-c419676b0203-kube-api-access-hq2nm\") on node \"crc\" DevicePath \"\"" Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.988189 5065 generic.go:334] "Generic (PLEG): container finished" podID="538269e2-0aea-40b5-83ef-c419676b0203" containerID="7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7" exitCode=0 Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.988238 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4xjh" event={"ID":"538269e2-0aea-40b5-83ef-c419676b0203","Type":"ContainerDied","Data":"7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7"} Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.988270 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4xjh" event={"ID":"538269e2-0aea-40b5-83ef-c419676b0203","Type":"ContainerDied","Data":"82f84e96d21d3b8cfafde135b5568aef2808787baf631e0461e728af41ae1b5e"} Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.988291 5065 scope.go:117] "RemoveContainer" containerID="7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7" Oct 08 14:29:43 crc kubenswrapper[5065]: I1008 14:29:43.988291 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4xjh" Oct 08 14:29:44 crc kubenswrapper[5065]: I1008 14:29:44.026026 5065 scope.go:117] "RemoveContainer" containerID="3cc49e2224fdd10ad690411a8eb884a47b398c5ca9f143842016dbd049f08b2a" Oct 08 14:29:44 crc kubenswrapper[5065]: I1008 14:29:44.035333 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4xjh"] Oct 08 14:29:44 crc kubenswrapper[5065]: I1008 14:29:44.042839 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t4xjh"] Oct 08 14:29:44 crc kubenswrapper[5065]: I1008 14:29:44.049216 5065 scope.go:117] "RemoveContainer" containerID="df790a520039fc47a2750996a6e22b3032384622cef98407a60e00f4493e6add" Oct 08 14:29:44 crc kubenswrapper[5065]: I1008 14:29:44.080720 5065 scope.go:117] "RemoveContainer" containerID="7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7" Oct 08 14:29:44 crc kubenswrapper[5065]: E1008 14:29:44.081278 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7\": container with ID starting with 7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7 not found: ID does not exist" containerID="7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7" Oct 08 14:29:44 crc kubenswrapper[5065]: I1008 14:29:44.081324 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7"} err="failed to get container status \"7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7\": rpc error: code = NotFound desc = could not find container \"7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7\": container with ID starting with 7f4c40b02a853e108066b0d52e4a5ffdcdd80055368708afb93c60a19980dab7 not found: ID does not exist" Oct 08 14:29:44 crc kubenswrapper[5065]: I1008 14:29:44.081359 5065 scope.go:117] "RemoveContainer" containerID="3cc49e2224fdd10ad690411a8eb884a47b398c5ca9f143842016dbd049f08b2a" Oct 08 14:29:44 crc kubenswrapper[5065]: E1008 14:29:44.081835 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cc49e2224fdd10ad690411a8eb884a47b398c5ca9f143842016dbd049f08b2a\": container with ID starting with 3cc49e2224fdd10ad690411a8eb884a47b398c5ca9f143842016dbd049f08b2a not found: ID does not exist" containerID="3cc49e2224fdd10ad690411a8eb884a47b398c5ca9f143842016dbd049f08b2a" Oct 08 14:29:44 crc kubenswrapper[5065]: I1008 14:29:44.081886 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc49e2224fdd10ad690411a8eb884a47b398c5ca9f143842016dbd049f08b2a"} err="failed to get container status \"3cc49e2224fdd10ad690411a8eb884a47b398c5ca9f143842016dbd049f08b2a\": rpc error: code = NotFound desc = could not find container \"3cc49e2224fdd10ad690411a8eb884a47b398c5ca9f143842016dbd049f08b2a\": container with ID starting with 3cc49e2224fdd10ad690411a8eb884a47b398c5ca9f143842016dbd049f08b2a not found: ID does not exist" Oct 08 14:29:44 crc kubenswrapper[5065]: I1008 14:29:44.081916 5065 scope.go:117] "RemoveContainer" containerID="df790a520039fc47a2750996a6e22b3032384622cef98407a60e00f4493e6add" Oct 08 14:29:44 crc kubenswrapper[5065]: E1008 14:29:44.082196 5065 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"df790a520039fc47a2750996a6e22b3032384622cef98407a60e00f4493e6add\": container with ID starting with df790a520039fc47a2750996a6e22b3032384622cef98407a60e00f4493e6add not found: ID does not exist" containerID="df790a520039fc47a2750996a6e22b3032384622cef98407a60e00f4493e6add" Oct 08 14:29:44 crc kubenswrapper[5065]: I1008 14:29:44.082229 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df790a520039fc47a2750996a6e22b3032384622cef98407a60e00f4493e6add"} err="failed to get container status \"df790a520039fc47a2750996a6e22b3032384622cef98407a60e00f4493e6add\": rpc error: code = NotFound desc = could not find container \"df790a520039fc47a2750996a6e22b3032384622cef98407a60e00f4493e6add\": container with ID starting with df790a520039fc47a2750996a6e22b3032384622cef98407a60e00f4493e6add not found: ID does not exist" Oct 08 14:29:44 crc kubenswrapper[5065]: I1008 14:29:44.892851 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="538269e2-0aea-40b5-83ef-c419676b0203" path="/var/lib/kubelet/pods/538269e2-0aea-40b5-83ef-c419676b0203/volumes" Oct 08 14:29:47 crc kubenswrapper[5065]: I1008 14:29:47.873553 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:29:47 crc kubenswrapper[5065]: E1008 14:29:47.874148 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.188982 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68"] Oct 08 14:30:00 crc kubenswrapper[5065]: E1008 14:30:00.190101 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538269e2-0aea-40b5-83ef-c419676b0203" containerName="extract-utilities" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.190122 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="538269e2-0aea-40b5-83ef-c419676b0203" containerName="extract-utilities" Oct 08 14:30:00 crc kubenswrapper[5065]: E1008 14:30:00.190137 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" containerName="registry-server" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.190145 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" containerName="registry-server" Oct 08 14:30:00 crc kubenswrapper[5065]: E1008 14:30:00.190156 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538269e2-0aea-40b5-83ef-c419676b0203" containerName="extract-content" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.190167 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="538269e2-0aea-40b5-83ef-c419676b0203" containerName="extract-content" Oct 08 14:30:00 crc kubenswrapper[5065]: E1008 14:30:00.190188 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538269e2-0aea-40b5-83ef-c419676b0203" containerName="registry-server" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.190197 5065 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="538269e2-0aea-40b5-83ef-c419676b0203" containerName="registry-server" Oct 08 14:30:00 crc kubenswrapper[5065]: E1008 14:30:00.190215 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" containerName="extract-utilities" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.190222 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" containerName="extract-utilities" Oct 08 14:30:00 crc kubenswrapper[5065]: E1008 14:30:00.190246 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" containerName="extract-content" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.190254 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" containerName="extract-content" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.190477 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="538269e2-0aea-40b5-83ef-c419676b0203" containerName="registry-server" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.190521 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f0bd48b-9c6c-4352-a83b-ac2e30e8adc3" containerName="registry-server" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.193509 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.194873 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68"] Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.204014 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.204088 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.316090 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8721515-1f87-4921-bdb3-02c9edbc2947-config-volume\") pod \"collect-profiles-29332230-4wp68\" (UID: \"e8721515-1f87-4921-bdb3-02c9edbc2947\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.316206 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhxw6\" (UniqueName: \"kubernetes.io/projected/e8721515-1f87-4921-bdb3-02c9edbc2947-kube-api-access-jhxw6\") pod \"collect-profiles-29332230-4wp68\" (UID: \"e8721515-1f87-4921-bdb3-02c9edbc2947\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.316245 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8721515-1f87-4921-bdb3-02c9edbc2947-secret-volume\") pod \"collect-profiles-29332230-4wp68\" (UID: \"e8721515-1f87-4921-bdb3-02c9edbc2947\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.436763 5065 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8721515-1f87-4921-bdb3-02c9edbc2947-config-volume\") pod \"collect-profiles-29332230-4wp68\" (UID: \"e8721515-1f87-4921-bdb3-02c9edbc2947\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.436871 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhxw6\" (UniqueName: \"kubernetes.io/projected/e8721515-1f87-4921-bdb3-02c9edbc2947-kube-api-access-jhxw6\") pod \"collect-profiles-29332230-4wp68\" (UID: \"e8721515-1f87-4921-bdb3-02c9edbc2947\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.436922 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8721515-1f87-4921-bdb3-02c9edbc2947-secret-volume\") pod \"collect-profiles-29332230-4wp68\" (UID: \"e8721515-1f87-4921-bdb3-02c9edbc2947\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.437590 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8721515-1f87-4921-bdb3-02c9edbc2947-config-volume\") pod \"collect-profiles-29332230-4wp68\" (UID: \"e8721515-1f87-4921-bdb3-02c9edbc2947\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.448894 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8721515-1f87-4921-bdb3-02c9edbc2947-secret-volume\") pod \"collect-profiles-29332230-4wp68\" (UID: \"e8721515-1f87-4921-bdb3-02c9edbc2947\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.456130 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhxw6\" (UniqueName: \"kubernetes.io/projected/e8721515-1f87-4921-bdb3-02c9edbc2947-kube-api-access-jhxw6\") pod \"collect-profiles-29332230-4wp68\" (UID: \"e8721515-1f87-4921-bdb3-02c9edbc2947\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.529861 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" Oct 08 14:30:00 crc kubenswrapper[5065]: I1008 14:30:00.781660 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68"] Oct 08 14:30:01 crc kubenswrapper[5065]: I1008 14:30:01.144551 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" event={"ID":"e8721515-1f87-4921-bdb3-02c9edbc2947","Type":"ContainerStarted","Data":"792ccec9a02d1588abfa3d5d6a27770ab07d040bb2b17b497e4023d3b3c7ed1b"} Oct 08 14:30:01 crc kubenswrapper[5065]: I1008 14:30:01.144855 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" event={"ID":"e8721515-1f87-4921-bdb3-02c9edbc2947","Type":"ContainerStarted","Data":"ad4f7aa1498a34b3dd88c3b2b28e9d61984233a5d8785bbc5bc1518e88b09347"} Oct 08 14:30:01 crc kubenswrapper[5065]: I1008 14:30:01.167897 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" podStartSLOduration=1.167881324 podStartE2EDuration="1.167881324s" podCreationTimestamp="2025-10-08 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:30:01.165693174 +0000 UTC m=+4302.943074931" watchObservedRunningTime="2025-10-08 14:30:01.167881324 +0000 UTC m=+4302.945263081" Oct 08 14:30:01 crc kubenswrapper[5065]: I1008 14:30:01.873591 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:30:01 crc kubenswrapper[5065]: E1008 14:30:01.874081 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:30:02 crc kubenswrapper[5065]: I1008 14:30:02.153646 5065 generic.go:334] "Generic (PLEG): container finished" podID="e8721515-1f87-4921-bdb3-02c9edbc2947" containerID="792ccec9a02d1588abfa3d5d6a27770ab07d040bb2b17b497e4023d3b3c7ed1b" exitCode=0 Oct 08 14:30:02 crc kubenswrapper[5065]: I1008 14:30:02.153694 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" event={"ID":"e8721515-1f87-4921-bdb3-02c9edbc2947","Type":"ContainerDied","Data":"792ccec9a02d1588abfa3d5d6a27770ab07d040bb2b17b497e4023d3b3c7ed1b"} Oct 08 14:30:03 crc kubenswrapper[5065]: I1008 14:30:03.413194 5065 util.go:48] "No ready sandbox for pod can be found. 
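collect-profiles is OLM's profile-collection CronJob, and the numeric segment in collect-profiles-29332230-4wp68 follows the CronJob controller's Job-naming convention: the scheduled time expressed in minutes since the Unix epoch. Decoding it lands exactly on the 14:30:00 UTC creation time logged above, and the collect-profiles-29332185-lx4fl pod deleted a few records below was scheduled 45 minutes earlier:

```python
from datetime import datetime, timezone

# Decode a CronJob job suffix: scheduled time in minutes since the epoch.
def scheduled_time(minutes: int) -> datetime:
    return datetime.fromtimestamp(minutes * 60, tz=timezone.utc)

print(scheduled_time(29332230))  # 2025-10-08 14:30:00+00:00 (this run)
print(29332230 - 29332185)       # 45 -> the previous, now-deleted run
```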
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" Oct 08 14:30:03 crc kubenswrapper[5065]: I1008 14:30:03.589067 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8721515-1f87-4921-bdb3-02c9edbc2947-config-volume\") pod \"e8721515-1f87-4921-bdb3-02c9edbc2947\" (UID: \"e8721515-1f87-4921-bdb3-02c9edbc2947\") " Oct 08 14:30:03 crc kubenswrapper[5065]: I1008 14:30:03.589217 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8721515-1f87-4921-bdb3-02c9edbc2947-secret-volume\") pod \"e8721515-1f87-4921-bdb3-02c9edbc2947\" (UID: \"e8721515-1f87-4921-bdb3-02c9edbc2947\") " Oct 08 14:30:03 crc kubenswrapper[5065]: I1008 14:30:03.589256 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhxw6\" (UniqueName: \"kubernetes.io/projected/e8721515-1f87-4921-bdb3-02c9edbc2947-kube-api-access-jhxw6\") pod \"e8721515-1f87-4921-bdb3-02c9edbc2947\" (UID: \"e8721515-1f87-4921-bdb3-02c9edbc2947\") " Oct 08 14:30:03 crc kubenswrapper[5065]: I1008 14:30:03.589943 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8721515-1f87-4921-bdb3-02c9edbc2947-config-volume" (OuterVolumeSpecName: "config-volume") pod "e8721515-1f87-4921-bdb3-02c9edbc2947" (UID: "e8721515-1f87-4921-bdb3-02c9edbc2947"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:30:03 crc kubenswrapper[5065]: I1008 14:30:03.595819 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8721515-1f87-4921-bdb3-02c9edbc2947-kube-api-access-jhxw6" (OuterVolumeSpecName: "kube-api-access-jhxw6") pod "e8721515-1f87-4921-bdb3-02c9edbc2947" (UID: "e8721515-1f87-4921-bdb3-02c9edbc2947"). InnerVolumeSpecName "kube-api-access-jhxw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:30:03 crc kubenswrapper[5065]: I1008 14:30:03.596484 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8721515-1f87-4921-bdb3-02c9edbc2947-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e8721515-1f87-4921-bdb3-02c9edbc2947" (UID: "e8721515-1f87-4921-bdb3-02c9edbc2947"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:30:03 crc kubenswrapper[5065]: I1008 14:30:03.691980 5065 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8721515-1f87-4921-bdb3-02c9edbc2947-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 14:30:03 crc kubenswrapper[5065]: I1008 14:30:03.692054 5065 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8721515-1f87-4921-bdb3-02c9edbc2947-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 14:30:03 crc kubenswrapper[5065]: I1008 14:30:03.692081 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhxw6\" (UniqueName: \"kubernetes.io/projected/e8721515-1f87-4921-bdb3-02c9edbc2947-kube-api-access-jhxw6\") on node \"crc\" DevicePath \"\"" Oct 08 14:30:04 crc kubenswrapper[5065]: I1008 14:30:04.171782 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" event={"ID":"e8721515-1f87-4921-bdb3-02c9edbc2947","Type":"ContainerDied","Data":"ad4f7aa1498a34b3dd88c3b2b28e9d61984233a5d8785bbc5bc1518e88b09347"} Oct 08 14:30:04 crc kubenswrapper[5065]: I1008 14:30:04.171835 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad4f7aa1498a34b3dd88c3b2b28e9d61984233a5d8785bbc5bc1518e88b09347" Oct 08 14:30:04 crc kubenswrapper[5065]: I1008 14:30:04.171859 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-4wp68" Oct 08 14:30:04 crc kubenswrapper[5065]: I1008 14:30:04.261068 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl"] Oct 08 14:30:04 crc kubenswrapper[5065]: I1008 14:30:04.268039 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332185-lx4fl"] Oct 08 14:30:04 crc kubenswrapper[5065]: I1008 14:30:04.883475 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f327948-557a-4479-b298-36c36ab07724" path="/var/lib/kubelet/pods/0f327948-557a-4479-b298-36c36ab07724/volumes" Oct 08 14:30:12 crc kubenswrapper[5065]: I1008 14:30:12.874309 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:30:12 crc kubenswrapper[5065]: E1008 14:30:12.875291 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:30:23 crc kubenswrapper[5065]: I1008 14:30:23.873592 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:30:23 crc kubenswrapper[5065]: E1008 14:30:23.874655 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:30:29 crc kubenswrapper[5065]: I1008 14:30:29.274983 5065 scope.go:117] "RemoveContainer" containerID="dcaf9326193530dec195f47164bfd2ea22d630f6421aa83e63983cb9ceea9630" Oct 08 14:30:34 crc kubenswrapper[5065]: I1008 14:30:34.873925 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:30:34 crc kubenswrapper[5065]: E1008 14:30:34.875038 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:30:47 crc kubenswrapper[5065]: I1008 14:30:47.873641 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:30:47 crc kubenswrapper[5065]: E1008 14:30:47.874315 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:30:58 crc kubenswrapper[5065]: I1008 14:30:58.880897 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:30:58 crc kubenswrapper[5065]: E1008 14:30:58.881844 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:31:10 crc kubenswrapper[5065]: I1008 14:31:10.874040 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:31:10 crc kubenswrapper[5065]: E1008 14:31:10.875291 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:31:23 crc kubenswrapper[5065]: I1008 14:31:23.876378 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:31:23 crc kubenswrapper[5065]: E1008 14:31:23.877599 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:31:37 crc kubenswrapper[5065]: I1008 14:31:37.874192 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:31:37 crc kubenswrapper[5065]: E1008 14:31:37.875037 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:31:49 crc kubenswrapper[5065]: I1008 14:31:49.874808 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:31:49 crc kubenswrapper[5065]: E1008 14:31:49.875722 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:32:00 crc kubenswrapper[5065]: I1008 14:32:00.874682 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:32:00 crc kubenswrapper[5065]: E1008 14:32:00.875663 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:32:14 crc kubenswrapper[5065]: I1008 14:32:14.873931 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:32:14 crc kubenswrapper[5065]: E1008 14:32:14.874755 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:32:25 crc kubenswrapper[5065]: I1008 14:32:25.873029 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:32:25 crc kubenswrapper[5065]: E1008 14:32:25.873837 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:32:36 crc kubenswrapper[5065]: I1008 14:32:36.876066 5065 
scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:32:36 crc kubenswrapper[5065]: E1008 14:32:36.876817 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:32:48 crc kubenswrapper[5065]: I1008 14:32:48.883762 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:32:48 crc kubenswrapper[5065]: E1008 14:32:48.884531 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:33:01 crc kubenswrapper[5065]: I1008 14:33:01.873930 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:33:01 crc kubenswrapper[5065]: E1008 14:33:01.875086 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.132632 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qffg2"] Oct 08 14:33:08 crc kubenswrapper[5065]: E1008 14:33:08.133498 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8721515-1f87-4921-bdb3-02c9edbc2947" containerName="collect-profiles" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.133516 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8721515-1f87-4921-bdb3-02c9edbc2947" containerName="collect-profiles" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.133718 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8721515-1f87-4921-bdb3-02c9edbc2947" containerName="collect-profiles" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.134849 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.146323 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qffg2"] Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.301857 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-utilities\") pod \"community-operators-qffg2\" (UID: \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\") " pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.301922 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh6nt\" (UniqueName: \"kubernetes.io/projected/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-kube-api-access-rh6nt\") pod \"community-operators-qffg2\" (UID: \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\") " pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.301974 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-catalog-content\") pod \"community-operators-qffg2\" (UID: \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\") " pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.403293 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-catalog-content\") pod \"community-operators-qffg2\" (UID: \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\") " pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.403431 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-utilities\") pod \"community-operators-qffg2\" (UID: \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\") " pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.403476 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh6nt\" (UniqueName: \"kubernetes.io/projected/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-kube-api-access-rh6nt\") pod \"community-operators-qffg2\" (UID: \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\") " pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.403893 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-catalog-content\") pod \"community-operators-qffg2\" (UID: \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\") " pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.403939 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-utilities\") pod \"community-operators-qffg2\" (UID: \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\") " pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.427760 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rh6nt\" (UniqueName: \"kubernetes.io/projected/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-kube-api-access-rh6nt\") pod \"community-operators-qffg2\" (UID: \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\") " pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.513471 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:08 crc kubenswrapper[5065]: I1008 14:33:08.949926 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qffg2"] Oct 08 14:33:09 crc kubenswrapper[5065]: I1008 14:33:09.764646 5065 generic.go:334] "Generic (PLEG): container finished" podID="ce2a11fd-3aa5-4512-b18d-e85382f0ed72" containerID="485b18e9eb21deb9da4df941859bad206916f7caedc9d71fc6a66dba1b3cded0" exitCode=0 Oct 08 14:33:09 crc kubenswrapper[5065]: I1008 14:33:09.764724 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qffg2" event={"ID":"ce2a11fd-3aa5-4512-b18d-e85382f0ed72","Type":"ContainerDied","Data":"485b18e9eb21deb9da4df941859bad206916f7caedc9d71fc6a66dba1b3cded0"} Oct 08 14:33:09 crc kubenswrapper[5065]: I1008 14:33:09.764764 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qffg2" event={"ID":"ce2a11fd-3aa5-4512-b18d-e85382f0ed72","Type":"ContainerStarted","Data":"bd979f74b8f6447fce18a87218032c428c4afe16fe1a8ed54e24ede4fa164f20"} Oct 08 14:33:10 crc kubenswrapper[5065]: I1008 14:33:10.774356 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qffg2" event={"ID":"ce2a11fd-3aa5-4512-b18d-e85382f0ed72","Type":"ContainerStarted","Data":"e784c85165ad1dc1ca96f7b1dcfea1e169f53f487f7770cb02f9675161c647d4"} Oct 08 14:33:11 crc kubenswrapper[5065]: I1008 14:33:11.787824 5065 generic.go:334] "Generic (PLEG): container finished" podID="ce2a11fd-3aa5-4512-b18d-e85382f0ed72" containerID="e784c85165ad1dc1ca96f7b1dcfea1e169f53f487f7770cb02f9675161c647d4" exitCode=0 Oct 08 14:33:11 crc kubenswrapper[5065]: I1008 14:33:11.787920 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qffg2" event={"ID":"ce2a11fd-3aa5-4512-b18d-e85382f0ed72","Type":"ContainerDied","Data":"e784c85165ad1dc1ca96f7b1dcfea1e169f53f487f7770cb02f9675161c647d4"} Oct 08 14:33:12 crc kubenswrapper[5065]: I1008 14:33:12.799527 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qffg2" event={"ID":"ce2a11fd-3aa5-4512-b18d-e85382f0ed72","Type":"ContainerStarted","Data":"940b5b2cd0e38ceef7729b7570f6a3a02de5683d1fb954adb8be0af550ace768"} Oct 08 14:33:12 crc kubenswrapper[5065]: I1008 14:33:12.823052 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qffg2" podStartSLOduration=2.036822085 podStartE2EDuration="4.823029207s" podCreationTimestamp="2025-10-08 14:33:08 +0000 UTC" firstStartedPulling="2025-10-08 14:33:09.76782311 +0000 UTC m=+4491.545204907" lastFinishedPulling="2025-10-08 14:33:12.554030262 +0000 UTC m=+4494.331412029" observedRunningTime="2025-10-08 14:33:12.815641376 +0000 UTC m=+4494.593023203" watchObservedRunningTime="2025-10-08 14:33:12.823029207 +0000 UTC m=+4494.600411004" Oct 08 14:33:15 crc kubenswrapper[5065]: I1008 14:33:15.873896 5065 scope.go:117] "RemoveContainer" 
containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:33:15 crc kubenswrapper[5065]: E1008 14:33:15.874392 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:33:18 crc kubenswrapper[5065]: I1008 14:33:18.514505 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:18 crc kubenswrapper[5065]: I1008 14:33:18.514953 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:18 crc kubenswrapper[5065]: I1008 14:33:18.591747 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:18 crc kubenswrapper[5065]: I1008 14:33:18.911302 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:18 crc kubenswrapper[5065]: I1008 14:33:18.971816 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qffg2"] Oct 08 14:33:20 crc kubenswrapper[5065]: I1008 14:33:20.868255 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qffg2" podUID="ce2a11fd-3aa5-4512-b18d-e85382f0ed72" containerName="registry-server" containerID="cri-o://940b5b2cd0e38ceef7729b7570f6a3a02de5683d1fb954adb8be0af550ace768" gracePeriod=2 Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.316861 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.403541 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh6nt\" (UniqueName: \"kubernetes.io/projected/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-kube-api-access-rh6nt\") pod \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\" (UID: \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\") " Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.403979 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-utilities\") pod \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\" (UID: \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\") " Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.404062 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-catalog-content\") pod \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\" (UID: \"ce2a11fd-3aa5-4512-b18d-e85382f0ed72\") " Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.404757 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-utilities" (OuterVolumeSpecName: "utilities") pod "ce2a11fd-3aa5-4512-b18d-e85382f0ed72" (UID: "ce2a11fd-3aa5-4512-b18d-e85382f0ed72"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.410642 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-kube-api-access-rh6nt" (OuterVolumeSpecName: "kube-api-access-rh6nt") pod "ce2a11fd-3aa5-4512-b18d-e85382f0ed72" (UID: "ce2a11fd-3aa5-4512-b18d-e85382f0ed72"). InnerVolumeSpecName "kube-api-access-rh6nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.506543 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh6nt\" (UniqueName: \"kubernetes.io/projected/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-kube-api-access-rh6nt\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.506600 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.879081 5065 generic.go:334] "Generic (PLEG): container finished" podID="ce2a11fd-3aa5-4512-b18d-e85382f0ed72" containerID="940b5b2cd0e38ceef7729b7570f6a3a02de5683d1fb954adb8be0af550ace768" exitCode=0 Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.879127 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qffg2" event={"ID":"ce2a11fd-3aa5-4512-b18d-e85382f0ed72","Type":"ContainerDied","Data":"940b5b2cd0e38ceef7729b7570f6a3a02de5683d1fb954adb8be0af550ace768"} Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.879156 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qffg2" event={"ID":"ce2a11fd-3aa5-4512-b18d-e85382f0ed72","Type":"ContainerDied","Data":"bd979f74b8f6447fce18a87218032c428c4afe16fe1a8ed54e24ede4fa164f20"} Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.879173 5065 scope.go:117] "RemoveContainer" containerID="940b5b2cd0e38ceef7729b7570f6a3a02de5683d1fb954adb8be0af550ace768" Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.879248 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qffg2" Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.910478 5065 scope.go:117] "RemoveContainer" containerID="e784c85165ad1dc1ca96f7b1dcfea1e169f53f487f7770cb02f9675161c647d4" Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.946180 5065 scope.go:117] "RemoveContainer" containerID="485b18e9eb21deb9da4df941859bad206916f7caedc9d71fc6a66dba1b3cded0" Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.973434 5065 scope.go:117] "RemoveContainer" containerID="940b5b2cd0e38ceef7729b7570f6a3a02de5683d1fb954adb8be0af550ace768" Oct 08 14:33:21 crc kubenswrapper[5065]: E1008 14:33:21.974038 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"940b5b2cd0e38ceef7729b7570f6a3a02de5683d1fb954adb8be0af550ace768\": container with ID starting with 940b5b2cd0e38ceef7729b7570f6a3a02de5683d1fb954adb8be0af550ace768 not found: ID does not exist" containerID="940b5b2cd0e38ceef7729b7570f6a3a02de5683d1fb954adb8be0af550ace768" Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.974083 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940b5b2cd0e38ceef7729b7570f6a3a02de5683d1fb954adb8be0af550ace768"} err="failed to get container status \"940b5b2cd0e38ceef7729b7570f6a3a02de5683d1fb954adb8be0af550ace768\": rpc error: code = NotFound desc = could not find container \"940b5b2cd0e38ceef7729b7570f6a3a02de5683d1fb954adb8be0af550ace768\": container with ID starting with 940b5b2cd0e38ceef7729b7570f6a3a02de5683d1fb954adb8be0af550ace768 not found: ID does not exist" Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.974110 5065 scope.go:117] "RemoveContainer" containerID="e784c85165ad1dc1ca96f7b1dcfea1e169f53f487f7770cb02f9675161c647d4" Oct 08 14:33:21 crc kubenswrapper[5065]: E1008 14:33:21.974667 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e784c85165ad1dc1ca96f7b1dcfea1e169f53f487f7770cb02f9675161c647d4\": container with ID starting with e784c85165ad1dc1ca96f7b1dcfea1e169f53f487f7770cb02f9675161c647d4 not found: ID does not exist" containerID="e784c85165ad1dc1ca96f7b1dcfea1e169f53f487f7770cb02f9675161c647d4" Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.974736 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e784c85165ad1dc1ca96f7b1dcfea1e169f53f487f7770cb02f9675161c647d4"} err="failed to get container status \"e784c85165ad1dc1ca96f7b1dcfea1e169f53f487f7770cb02f9675161c647d4\": rpc error: code = NotFound desc = could not find container \"e784c85165ad1dc1ca96f7b1dcfea1e169f53f487f7770cb02f9675161c647d4\": container with ID starting with e784c85165ad1dc1ca96f7b1dcfea1e169f53f487f7770cb02f9675161c647d4 not found: ID does not exist" Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.974786 5065 scope.go:117] "RemoveContainer" containerID="485b18e9eb21deb9da4df941859bad206916f7caedc9d71fc6a66dba1b3cded0" Oct 08 14:33:21 crc kubenswrapper[5065]: E1008 14:33:21.975263 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"485b18e9eb21deb9da4df941859bad206916f7caedc9d71fc6a66dba1b3cded0\": container with ID starting with 485b18e9eb21deb9da4df941859bad206916f7caedc9d71fc6a66dba1b3cded0 not found: ID does not exist" containerID="485b18e9eb21deb9da4df941859bad206916f7caedc9d71fc6a66dba1b3cded0" 
Oct 08 14:33:21 crc kubenswrapper[5065]: I1008 14:33:21.975297 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"485b18e9eb21deb9da4df941859bad206916f7caedc9d71fc6a66dba1b3cded0"} err="failed to get container status \"485b18e9eb21deb9da4df941859bad206916f7caedc9d71fc6a66dba1b3cded0\": rpc error: code = NotFound desc = could not find container \"485b18e9eb21deb9da4df941859bad206916f7caedc9d71fc6a66dba1b3cded0\": container with ID starting with 485b18e9eb21deb9da4df941859bad206916f7caedc9d71fc6a66dba1b3cded0 not found: ID does not exist" Oct 08 14:33:22 crc kubenswrapper[5065]: I1008 14:33:22.175607 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce2a11fd-3aa5-4512-b18d-e85382f0ed72" (UID: "ce2a11fd-3aa5-4512-b18d-e85382f0ed72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:33:22 crc kubenswrapper[5065]: I1008 14:33:22.211532 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qffg2"] Oct 08 14:33:22 crc kubenswrapper[5065]: I1008 14:33:22.217487 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce2a11fd-3aa5-4512-b18d-e85382f0ed72-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:22 crc kubenswrapper[5065]: I1008 14:33:22.226128 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qffg2"] Oct 08 14:33:22 crc kubenswrapper[5065]: I1008 14:33:22.888467 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce2a11fd-3aa5-4512-b18d-e85382f0ed72" path="/var/lib/kubelet/pods/ce2a11fd-3aa5-4512-b18d-e85382f0ed72/volumes" Oct 08 14:33:30 crc kubenswrapper[5065]: I1008 14:33:30.875007 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:33:30 crc kubenswrapper[5065]: E1008 14:33:30.876263 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:33:43 crc kubenswrapper[5065]: I1008 14:33:43.873585 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:33:43 crc kubenswrapper[5065]: E1008 14:33:43.874702 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:33:54 crc kubenswrapper[5065]: I1008 14:33:54.878162 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:33:54 crc kubenswrapper[5065]: E1008 14:33:54.879475 5065 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:34:05 crc kubenswrapper[5065]: I1008 14:34:05.873756 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:34:05 crc kubenswrapper[5065]: E1008 14:34:05.874901 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:34:18 crc kubenswrapper[5065]: I1008 14:34:18.882089 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:34:18 crc kubenswrapper[5065]: E1008 14:34:18.883632 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:34:33 crc kubenswrapper[5065]: I1008 14:34:33.874193 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:34:34 crc kubenswrapper[5065]: I1008 14:34:34.540883 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"3d3af3101a610eee407e4cc35cf95788dd5358092821493201c93dcb1ae095eb"} Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.464630 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qxlqr"] Oct 08 14:36:48 crc kubenswrapper[5065]: E1008 14:36:48.465505 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2a11fd-3aa5-4512-b18d-e85382f0ed72" containerName="extract-utilities" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.465520 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2a11fd-3aa5-4512-b18d-e85382f0ed72" containerName="extract-utilities" Oct 08 14:36:48 crc kubenswrapper[5065]: E1008 14:36:48.465533 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2a11fd-3aa5-4512-b18d-e85382f0ed72" containerName="registry-server" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.465541 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2a11fd-3aa5-4512-b18d-e85382f0ed72" containerName="registry-server" Oct 08 14:36:48 crc kubenswrapper[5065]: E1008 14:36:48.465560 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2a11fd-3aa5-4512-b18d-e85382f0ed72" containerName="extract-content" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.465569 5065 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ce2a11fd-3aa5-4512-b18d-e85382f0ed72" containerName="extract-content" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.465727 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce2a11fd-3aa5-4512-b18d-e85382f0ed72" containerName="registry-server" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.466779 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.495393 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxlqr"] Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.586467 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c884730-599a-486e-8a45-06f721dbf33a-catalog-content\") pod \"certified-operators-qxlqr\" (UID: \"3c884730-599a-486e-8a45-06f721dbf33a\") " pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.586633 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6dmt\" (UniqueName: \"kubernetes.io/projected/3c884730-599a-486e-8a45-06f721dbf33a-kube-api-access-x6dmt\") pod \"certified-operators-qxlqr\" (UID: \"3c884730-599a-486e-8a45-06f721dbf33a\") " pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.586684 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c884730-599a-486e-8a45-06f721dbf33a-utilities\") pod \"certified-operators-qxlqr\" (UID: \"3c884730-599a-486e-8a45-06f721dbf33a\") " pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.688022 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c884730-599a-486e-8a45-06f721dbf33a-catalog-content\") pod \"certified-operators-qxlqr\" (UID: \"3c884730-599a-486e-8a45-06f721dbf33a\") " pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.688093 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6dmt\" (UniqueName: \"kubernetes.io/projected/3c884730-599a-486e-8a45-06f721dbf33a-kube-api-access-x6dmt\") pod \"certified-operators-qxlqr\" (UID: \"3c884730-599a-486e-8a45-06f721dbf33a\") " pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.688119 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c884730-599a-486e-8a45-06f721dbf33a-utilities\") pod \"certified-operators-qxlqr\" (UID: \"3c884730-599a-486e-8a45-06f721dbf33a\") " pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.690161 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c884730-599a-486e-8a45-06f721dbf33a-catalog-content\") pod \"certified-operators-qxlqr\" (UID: \"3c884730-599a-486e-8a45-06f721dbf33a\") " pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.693751 5065 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c884730-599a-486e-8a45-06f721dbf33a-utilities\") pod \"certified-operators-qxlqr\" (UID: \"3c884730-599a-486e-8a45-06f721dbf33a\") " pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.717267 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6dmt\" (UniqueName: \"kubernetes.io/projected/3c884730-599a-486e-8a45-06f721dbf33a-kube-api-access-x6dmt\") pod \"certified-operators-qxlqr\" (UID: \"3c884730-599a-486e-8a45-06f721dbf33a\") " pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:48 crc kubenswrapper[5065]: I1008 14:36:48.791106 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:49 crc kubenswrapper[5065]: I1008 14:36:49.061828 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxlqr"] Oct 08 14:36:49 crc kubenswrapper[5065]: I1008 14:36:49.756632 5065 generic.go:334] "Generic (PLEG): container finished" podID="3c884730-599a-486e-8a45-06f721dbf33a" containerID="2840f8696a09d5d5cfc067f9c1c2bc4a585b09d8d64488af6317b4581f2932f9" exitCode=0 Oct 08 14:36:49 crc kubenswrapper[5065]: I1008 14:36:49.756876 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlqr" event={"ID":"3c884730-599a-486e-8a45-06f721dbf33a","Type":"ContainerDied","Data":"2840f8696a09d5d5cfc067f9c1c2bc4a585b09d8d64488af6317b4581f2932f9"} Oct 08 14:36:49 crc kubenswrapper[5065]: I1008 14:36:49.757048 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlqr" event={"ID":"3c884730-599a-486e-8a45-06f721dbf33a","Type":"ContainerStarted","Data":"266bea89c1da9d66808cbd4191863bb6c4957e526c4d34ba7e2eb557bb184ee1"} Oct 08 14:36:49 crc kubenswrapper[5065]: I1008 14:36:49.761014 5065 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 14:36:50 crc kubenswrapper[5065]: I1008 14:36:50.769470 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlqr" event={"ID":"3c884730-599a-486e-8a45-06f721dbf33a","Type":"ContainerStarted","Data":"a9394422017788c97959318024e6f8ea1bb6ae3b84e8189bf0da5900fa23ef4a"} Oct 08 14:36:51 crc kubenswrapper[5065]: I1008 14:36:51.784834 5065 generic.go:334] "Generic (PLEG): container finished" podID="3c884730-599a-486e-8a45-06f721dbf33a" containerID="a9394422017788c97959318024e6f8ea1bb6ae3b84e8189bf0da5900fa23ef4a" exitCode=0 Oct 08 14:36:51 crc kubenswrapper[5065]: I1008 14:36:51.784901 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlqr" event={"ID":"3c884730-599a-486e-8a45-06f721dbf33a","Type":"ContainerDied","Data":"a9394422017788c97959318024e6f8ea1bb6ae3b84e8189bf0da5900fa23ef4a"} Oct 08 14:36:52 crc kubenswrapper[5065]: I1008 14:36:52.794864 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlqr" event={"ID":"3c884730-599a-486e-8a45-06f721dbf33a","Type":"ContainerStarted","Data":"5ccb1b903c49e0814e062653e988b3b0a4a2233cf39f1975de64737f5e0e8fd9"} Oct 08 14:36:52 crc kubenswrapper[5065]: I1008 14:36:52.815871 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-qxlqr" podStartSLOduration=2.355867741 podStartE2EDuration="4.815849383s" podCreationTimestamp="2025-10-08 14:36:48 +0000 UTC" firstStartedPulling="2025-10-08 14:36:49.760585573 +0000 UTC m=+4711.537967370" lastFinishedPulling="2025-10-08 14:36:52.220567265 +0000 UTC m=+4713.997949012" observedRunningTime="2025-10-08 14:36:52.815187325 +0000 UTC m=+4714.592569102" watchObservedRunningTime="2025-10-08 14:36:52.815849383 +0000 UTC m=+4714.593231160" Oct 08 14:36:54 crc kubenswrapper[5065]: I1008 14:36:54.375210 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:36:54 crc kubenswrapper[5065]: I1008 14:36:54.375636 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:36:58 crc kubenswrapper[5065]: I1008 14:36:58.792500 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:58 crc kubenswrapper[5065]: I1008 14:36:58.792951 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:58 crc kubenswrapper[5065]: I1008 14:36:58.909665 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:58 crc kubenswrapper[5065]: I1008 14:36:58.974005 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:36:59 crc kubenswrapper[5065]: I1008 14:36:59.663360 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxlqr"] Oct 08 14:37:00 crc kubenswrapper[5065]: I1008 14:37:00.878595 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qxlqr" podUID="3c884730-599a-486e-8a45-06f721dbf33a" containerName="registry-server" containerID="cri-o://5ccb1b903c49e0814e062653e988b3b0a4a2233cf39f1975de64737f5e0e8fd9" gracePeriod=2 Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.274162 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.290951 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c884730-599a-486e-8a45-06f721dbf33a-utilities\") pod \"3c884730-599a-486e-8a45-06f721dbf33a\" (UID: \"3c884730-599a-486e-8a45-06f721dbf33a\") " Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.291066 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6dmt\" (UniqueName: \"kubernetes.io/projected/3c884730-599a-486e-8a45-06f721dbf33a-kube-api-access-x6dmt\") pod \"3c884730-599a-486e-8a45-06f721dbf33a\" (UID: \"3c884730-599a-486e-8a45-06f721dbf33a\") " Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.291114 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c884730-599a-486e-8a45-06f721dbf33a-catalog-content\") pod \"3c884730-599a-486e-8a45-06f721dbf33a\" (UID: \"3c884730-599a-486e-8a45-06f721dbf33a\") " Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.292532 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c884730-599a-486e-8a45-06f721dbf33a-utilities" (OuterVolumeSpecName: "utilities") pod "3c884730-599a-486e-8a45-06f721dbf33a" (UID: "3c884730-599a-486e-8a45-06f721dbf33a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.298397 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c884730-599a-486e-8a45-06f721dbf33a-kube-api-access-x6dmt" (OuterVolumeSpecName: "kube-api-access-x6dmt") pod "3c884730-599a-486e-8a45-06f721dbf33a" (UID: "3c884730-599a-486e-8a45-06f721dbf33a"). InnerVolumeSpecName "kube-api-access-x6dmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.335969 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c884730-599a-486e-8a45-06f721dbf33a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c884730-599a-486e-8a45-06f721dbf33a" (UID: "3c884730-599a-486e-8a45-06f721dbf33a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.392534 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6dmt\" (UniqueName: \"kubernetes.io/projected/3c884730-599a-486e-8a45-06f721dbf33a-kube-api-access-x6dmt\") on node \"crc\" DevicePath \"\"" Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.392577 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c884730-599a-486e-8a45-06f721dbf33a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.392595 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c884730-599a-486e-8a45-06f721dbf33a-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.892333 5065 generic.go:334] "Generic (PLEG): container finished" podID="3c884730-599a-486e-8a45-06f721dbf33a" containerID="5ccb1b903c49e0814e062653e988b3b0a4a2233cf39f1975de64737f5e0e8fd9" exitCode=0 Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.892445 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlqr" event={"ID":"3c884730-599a-486e-8a45-06f721dbf33a","Type":"ContainerDied","Data":"5ccb1b903c49e0814e062653e988b3b0a4a2233cf39f1975de64737f5e0e8fd9"} Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.892526 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxlqr" event={"ID":"3c884730-599a-486e-8a45-06f721dbf33a","Type":"ContainerDied","Data":"266bea89c1da9d66808cbd4191863bb6c4957e526c4d34ba7e2eb557bb184ee1"} Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.892558 5065 scope.go:117] "RemoveContainer" containerID="5ccb1b903c49e0814e062653e988b3b0a4a2233cf39f1975de64737f5e0e8fd9" Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.892725 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxlqr" Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.928258 5065 scope.go:117] "RemoveContainer" containerID="a9394422017788c97959318024e6f8ea1bb6ae3b84e8189bf0da5900fa23ef4a" Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.959935 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxlqr"] Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.967797 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qxlqr"] Oct 08 14:37:01 crc kubenswrapper[5065]: I1008 14:37:01.990709 5065 scope.go:117] "RemoveContainer" containerID="2840f8696a09d5d5cfc067f9c1c2bc4a585b09d8d64488af6317b4581f2932f9" Oct 08 14:37:02 crc kubenswrapper[5065]: I1008 14:37:02.022158 5065 scope.go:117] "RemoveContainer" containerID="5ccb1b903c49e0814e062653e988b3b0a4a2233cf39f1975de64737f5e0e8fd9" Oct 08 14:37:02 crc kubenswrapper[5065]: E1008 14:37:02.022729 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ccb1b903c49e0814e062653e988b3b0a4a2233cf39f1975de64737f5e0e8fd9\": container with ID starting with 5ccb1b903c49e0814e062653e988b3b0a4a2233cf39f1975de64737f5e0e8fd9 not found: ID does not exist" containerID="5ccb1b903c49e0814e062653e988b3b0a4a2233cf39f1975de64737f5e0e8fd9" Oct 08 14:37:02 crc kubenswrapper[5065]: I1008 14:37:02.022806 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ccb1b903c49e0814e062653e988b3b0a4a2233cf39f1975de64737f5e0e8fd9"} err="failed to get container status \"5ccb1b903c49e0814e062653e988b3b0a4a2233cf39f1975de64737f5e0e8fd9\": rpc error: code = NotFound desc = could not find container \"5ccb1b903c49e0814e062653e988b3b0a4a2233cf39f1975de64737f5e0e8fd9\": container with ID starting with 5ccb1b903c49e0814e062653e988b3b0a4a2233cf39f1975de64737f5e0e8fd9 not found: ID does not exist" Oct 08 14:37:02 crc kubenswrapper[5065]: I1008 14:37:02.022850 5065 scope.go:117] "RemoveContainer" containerID="a9394422017788c97959318024e6f8ea1bb6ae3b84e8189bf0da5900fa23ef4a" Oct 08 14:37:02 crc kubenswrapper[5065]: E1008 14:37:02.023385 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9394422017788c97959318024e6f8ea1bb6ae3b84e8189bf0da5900fa23ef4a\": container with ID starting with a9394422017788c97959318024e6f8ea1bb6ae3b84e8189bf0da5900fa23ef4a not found: ID does not exist" containerID="a9394422017788c97959318024e6f8ea1bb6ae3b84e8189bf0da5900fa23ef4a" Oct 08 14:37:02 crc kubenswrapper[5065]: I1008 14:37:02.023454 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9394422017788c97959318024e6f8ea1bb6ae3b84e8189bf0da5900fa23ef4a"} err="failed to get container status \"a9394422017788c97959318024e6f8ea1bb6ae3b84e8189bf0da5900fa23ef4a\": rpc error: code = NotFound desc = could not find container \"a9394422017788c97959318024e6f8ea1bb6ae3b84e8189bf0da5900fa23ef4a\": container with ID starting with a9394422017788c97959318024e6f8ea1bb6ae3b84e8189bf0da5900fa23ef4a not found: ID does not exist" Oct 08 14:37:02 crc kubenswrapper[5065]: I1008 14:37:02.023483 5065 scope.go:117] "RemoveContainer" containerID="2840f8696a09d5d5cfc067f9c1c2bc4a585b09d8d64488af6317b4581f2932f9" Oct 08 14:37:02 crc kubenswrapper[5065]: E1008 14:37:02.024020 5065 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2840f8696a09d5d5cfc067f9c1c2bc4a585b09d8d64488af6317b4581f2932f9\": container with ID starting with 2840f8696a09d5d5cfc067f9c1c2bc4a585b09d8d64488af6317b4581f2932f9 not found: ID does not exist" containerID="2840f8696a09d5d5cfc067f9c1c2bc4a585b09d8d64488af6317b4581f2932f9" Oct 08 14:37:02 crc kubenswrapper[5065]: I1008 14:37:02.024046 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2840f8696a09d5d5cfc067f9c1c2bc4a585b09d8d64488af6317b4581f2932f9"} err="failed to get container status \"2840f8696a09d5d5cfc067f9c1c2bc4a585b09d8d64488af6317b4581f2932f9\": rpc error: code = NotFound desc = could not find container \"2840f8696a09d5d5cfc067f9c1c2bc4a585b09d8d64488af6317b4581f2932f9\": container with ID starting with 2840f8696a09d5d5cfc067f9c1c2bc4a585b09d8d64488af6317b4581f2932f9 not found: ID does not exist" Oct 08 14:37:02 crc kubenswrapper[5065]: I1008 14:37:02.889552 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c884730-599a-486e-8a45-06f721dbf33a" path="/var/lib/kubelet/pods/3c884730-599a-486e-8a45-06f721dbf33a/volumes" Oct 08 14:37:24 crc kubenswrapper[5065]: I1008 14:37:24.376556 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:37:24 crc kubenswrapper[5065]: I1008 14:37:24.377161 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:37:54 crc kubenswrapper[5065]: I1008 14:37:54.375266 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:37:54 crc kubenswrapper[5065]: I1008 14:37:54.375837 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:37:54 crc kubenswrapper[5065]: I1008 14:37:54.375897 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 14:37:54 crc kubenswrapper[5065]: I1008 14:37:54.376761 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d3af3101a610eee407e4cc35cf95788dd5358092821493201c93dcb1ae095eb"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:37:54 crc kubenswrapper[5065]: I1008 14:37:54.376844 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" 
podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://3d3af3101a610eee407e4cc35cf95788dd5358092821493201c93dcb1ae095eb" gracePeriod=600 Oct 08 14:37:55 crc kubenswrapper[5065]: I1008 14:37:55.364306 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="3d3af3101a610eee407e4cc35cf95788dd5358092821493201c93dcb1ae095eb" exitCode=0 Oct 08 14:37:55 crc kubenswrapper[5065]: I1008 14:37:55.364378 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"3d3af3101a610eee407e4cc35cf95788dd5358092821493201c93dcb1ae095eb"} Oct 08 14:37:55 crc kubenswrapper[5065]: I1008 14:37:55.364821 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0"} Oct 08 14:37:55 crc kubenswrapper[5065]: I1008 14:37:55.364854 5065 scope.go:117] "RemoveContainer" containerID="7da743cc6e4bbba24adb80539c7c7f4eef76895a3e7f90cea50dd6d20bb51268" Oct 08 14:38:09 crc kubenswrapper[5065]: I1008 14:38:09.862915 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-hhd9n"] Oct 08 14:38:09 crc kubenswrapper[5065]: I1008 14:38:09.872333 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-hhd9n"] Oct 08 14:38:09 crc kubenswrapper[5065]: I1008 14:38:09.972083 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-j9dlj"] Oct 08 14:38:09 crc kubenswrapper[5065]: E1008 14:38:09.972395 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c884730-599a-486e-8a45-06f721dbf33a" containerName="extract-content" Oct 08 14:38:09 crc kubenswrapper[5065]: I1008 14:38:09.972436 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c884730-599a-486e-8a45-06f721dbf33a" containerName="extract-content" Oct 08 14:38:09 crc kubenswrapper[5065]: E1008 14:38:09.972460 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c884730-599a-486e-8a45-06f721dbf33a" containerName="extract-utilities" Oct 08 14:38:09 crc kubenswrapper[5065]: I1008 14:38:09.972470 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c884730-599a-486e-8a45-06f721dbf33a" containerName="extract-utilities" Oct 08 14:38:09 crc kubenswrapper[5065]: E1008 14:38:09.972494 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c884730-599a-486e-8a45-06f721dbf33a" containerName="registry-server" Oct 08 14:38:09 crc kubenswrapper[5065]: I1008 14:38:09.972502 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c884730-599a-486e-8a45-06f721dbf33a" containerName="registry-server" Oct 08 14:38:09 crc kubenswrapper[5065]: I1008 14:38:09.972673 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c884730-599a-486e-8a45-06f721dbf33a" containerName="registry-server" Oct 08 14:38:09 crc kubenswrapper[5065]: I1008 14:38:09.973268 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-j9dlj" Oct 08 14:38:09 crc kubenswrapper[5065]: I1008 14:38:09.975154 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Oct 08 14:38:09 crc kubenswrapper[5065]: I1008 14:38:09.975555 5065 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-bsn6p" Oct 08 14:38:09 crc kubenswrapper[5065]: I1008 14:38:09.975558 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Oct 08 14:38:09 crc kubenswrapper[5065]: I1008 14:38:09.975619 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Oct 08 14:38:09 crc kubenswrapper[5065]: I1008 14:38:09.984312 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-j9dlj"] Oct 08 14:38:10 crc kubenswrapper[5065]: I1008 14:38:10.126049 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/274be603-892a-4443-942c-1a84e997a2a3-crc-storage\") pod \"crc-storage-crc-j9dlj\" (UID: \"274be603-892a-4443-942c-1a84e997a2a3\") " pod="crc-storage/crc-storage-crc-j9dlj" Oct 08 14:38:10 crc kubenswrapper[5065]: I1008 14:38:10.126665 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/274be603-892a-4443-942c-1a84e997a2a3-node-mnt\") pod \"crc-storage-crc-j9dlj\" (UID: \"274be603-892a-4443-942c-1a84e997a2a3\") " pod="crc-storage/crc-storage-crc-j9dlj" Oct 08 14:38:10 crc kubenswrapper[5065]: I1008 14:38:10.126777 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gmcf\" (UniqueName: \"kubernetes.io/projected/274be603-892a-4443-942c-1a84e997a2a3-kube-api-access-5gmcf\") pod \"crc-storage-crc-j9dlj\" (UID: \"274be603-892a-4443-942c-1a84e997a2a3\") " pod="crc-storage/crc-storage-crc-j9dlj" Oct 08 14:38:10 crc kubenswrapper[5065]: I1008 14:38:10.228062 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gmcf\" (UniqueName: \"kubernetes.io/projected/274be603-892a-4443-942c-1a84e997a2a3-kube-api-access-5gmcf\") pod \"crc-storage-crc-j9dlj\" (UID: \"274be603-892a-4443-942c-1a84e997a2a3\") " pod="crc-storage/crc-storage-crc-j9dlj" Oct 08 14:38:10 crc kubenswrapper[5065]: I1008 14:38:10.228166 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/274be603-892a-4443-942c-1a84e997a2a3-crc-storage\") pod \"crc-storage-crc-j9dlj\" (UID: \"274be603-892a-4443-942c-1a84e997a2a3\") " pod="crc-storage/crc-storage-crc-j9dlj" Oct 08 14:38:10 crc kubenswrapper[5065]: I1008 14:38:10.228267 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/274be603-892a-4443-942c-1a84e997a2a3-node-mnt\") pod \"crc-storage-crc-j9dlj\" (UID: \"274be603-892a-4443-942c-1a84e997a2a3\") " pod="crc-storage/crc-storage-crc-j9dlj" Oct 08 14:38:10 crc kubenswrapper[5065]: I1008 14:38:10.228737 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/274be603-892a-4443-942c-1a84e997a2a3-node-mnt\") pod \"crc-storage-crc-j9dlj\" (UID: \"274be603-892a-4443-942c-1a84e997a2a3\") " 
pod="crc-storage/crc-storage-crc-j9dlj" Oct 08 14:38:10 crc kubenswrapper[5065]: I1008 14:38:10.229259 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/274be603-892a-4443-942c-1a84e997a2a3-crc-storage\") pod \"crc-storage-crc-j9dlj\" (UID: \"274be603-892a-4443-942c-1a84e997a2a3\") " pod="crc-storage/crc-storage-crc-j9dlj" Oct 08 14:38:10 crc kubenswrapper[5065]: I1008 14:38:10.265500 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gmcf\" (UniqueName: \"kubernetes.io/projected/274be603-892a-4443-942c-1a84e997a2a3-kube-api-access-5gmcf\") pod \"crc-storage-crc-j9dlj\" (UID: \"274be603-892a-4443-942c-1a84e997a2a3\") " pod="crc-storage/crc-storage-crc-j9dlj" Oct 08 14:38:10 crc kubenswrapper[5065]: I1008 14:38:10.295718 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-j9dlj" Oct 08 14:38:10 crc kubenswrapper[5065]: I1008 14:38:10.821628 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-j9dlj"] Oct 08 14:38:10 crc kubenswrapper[5065]: W1008 14:38:10.829035 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod274be603_892a_4443_942c_1a84e997a2a3.slice/crio-8d31b19e2611b246d7376c8e1d92dbc572d4c366fdc19f9823883e2280201397 WatchSource:0}: Error finding container 8d31b19e2611b246d7376c8e1d92dbc572d4c366fdc19f9823883e2280201397: Status 404 returned error can't find the container with id 8d31b19e2611b246d7376c8e1d92dbc572d4c366fdc19f9823883e2280201397 Oct 08 14:38:10 crc kubenswrapper[5065]: I1008 14:38:10.891348 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e111c5-5a16-4675-a8eb-d1ae6a879e11" path="/var/lib/kubelet/pods/d8e111c5-5a16-4675-a8eb-d1ae6a879e11/volumes" Oct 08 14:38:11 crc kubenswrapper[5065]: I1008 14:38:11.516375 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-j9dlj" event={"ID":"274be603-892a-4443-942c-1a84e997a2a3","Type":"ContainerStarted","Data":"8d31b19e2611b246d7376c8e1d92dbc572d4c366fdc19f9823883e2280201397"} Oct 08 14:38:12 crc kubenswrapper[5065]: I1008 14:38:12.530221 5065 generic.go:334] "Generic (PLEG): container finished" podID="274be603-892a-4443-942c-1a84e997a2a3" containerID="ace567a042dc9f3d8ce74137b4ca2be1a3d5b1f0345433a115ff2e898da07872" exitCode=0 Oct 08 14:38:12 crc kubenswrapper[5065]: I1008 14:38:12.530377 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-j9dlj" event={"ID":"274be603-892a-4443-942c-1a84e997a2a3","Type":"ContainerDied","Data":"ace567a042dc9f3d8ce74137b4ca2be1a3d5b1f0345433a115ff2e898da07872"} Oct 08 14:38:13 crc kubenswrapper[5065]: I1008 14:38:13.843613 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-j9dlj" Oct 08 14:38:13 crc kubenswrapper[5065]: I1008 14:38:13.981493 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gmcf\" (UniqueName: \"kubernetes.io/projected/274be603-892a-4443-942c-1a84e997a2a3-kube-api-access-5gmcf\") pod \"274be603-892a-4443-942c-1a84e997a2a3\" (UID: \"274be603-892a-4443-942c-1a84e997a2a3\") " Oct 08 14:38:13 crc kubenswrapper[5065]: I1008 14:38:13.982129 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/274be603-892a-4443-942c-1a84e997a2a3-crc-storage\") pod \"274be603-892a-4443-942c-1a84e997a2a3\" (UID: \"274be603-892a-4443-942c-1a84e997a2a3\") " Oct 08 14:38:13 crc kubenswrapper[5065]: I1008 14:38:13.982312 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/274be603-892a-4443-942c-1a84e997a2a3-node-mnt\") pod \"274be603-892a-4443-942c-1a84e997a2a3\" (UID: \"274be603-892a-4443-942c-1a84e997a2a3\") " Oct 08 14:38:13 crc kubenswrapper[5065]: I1008 14:38:13.982661 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/274be603-892a-4443-942c-1a84e997a2a3-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "274be603-892a-4443-942c-1a84e997a2a3" (UID: "274be603-892a-4443-942c-1a84e997a2a3"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:38:13 crc kubenswrapper[5065]: I1008 14:38:13.982971 5065 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/274be603-892a-4443-942c-1a84e997a2a3-node-mnt\") on node \"crc\" DevicePath \"\"" Oct 08 14:38:14 crc kubenswrapper[5065]: I1008 14:38:14.000100 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/274be603-892a-4443-942c-1a84e997a2a3-kube-api-access-5gmcf" (OuterVolumeSpecName: "kube-api-access-5gmcf") pod "274be603-892a-4443-942c-1a84e997a2a3" (UID: "274be603-892a-4443-942c-1a84e997a2a3"). InnerVolumeSpecName "kube-api-access-5gmcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:38:14 crc kubenswrapper[5065]: I1008 14:38:14.015867 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/274be603-892a-4443-942c-1a84e997a2a3-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "274be603-892a-4443-942c-1a84e997a2a3" (UID: "274be603-892a-4443-942c-1a84e997a2a3"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:38:14 crc kubenswrapper[5065]: I1008 14:38:14.085264 5065 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/274be603-892a-4443-942c-1a84e997a2a3-crc-storage\") on node \"crc\" DevicePath \"\"" Oct 08 14:38:14 crc kubenswrapper[5065]: I1008 14:38:14.085324 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gmcf\" (UniqueName: \"kubernetes.io/projected/274be603-892a-4443-942c-1a84e997a2a3-kube-api-access-5gmcf\") on node \"crc\" DevicePath \"\"" Oct 08 14:38:14 crc kubenswrapper[5065]: I1008 14:38:14.558454 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-j9dlj" event={"ID":"274be603-892a-4443-942c-1a84e997a2a3","Type":"ContainerDied","Data":"8d31b19e2611b246d7376c8e1d92dbc572d4c366fdc19f9823883e2280201397"} Oct 08 14:38:14 crc kubenswrapper[5065]: I1008 14:38:14.558500 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d31b19e2611b246d7376c8e1d92dbc572d4c366fdc19f9823883e2280201397" Oct 08 14:38:14 crc kubenswrapper[5065]: I1008 14:38:14.558548 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-j9dlj" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.039118 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-j9dlj"] Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.045727 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-j9dlj"] Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.176960 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-6qsrh"] Oct 08 14:38:16 crc kubenswrapper[5065]: E1008 14:38:16.177404 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="274be603-892a-4443-942c-1a84e997a2a3" containerName="storage" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.177462 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="274be603-892a-4443-942c-1a84e997a2a3" containerName="storage" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.177828 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="274be603-892a-4443-942c-1a84e997a2a3" containerName="storage" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.179077 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-6qsrh" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.183839 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.183975 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.184181 5065 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-bsn6p" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.184410 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.196937 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-6qsrh"] Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.321772 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr266\" (UniqueName: \"kubernetes.io/projected/4cf09896-f563-4c1c-b34e-a3a0c4318354-kube-api-access-gr266\") pod \"crc-storage-crc-6qsrh\" (UID: \"4cf09896-f563-4c1c-b34e-a3a0c4318354\") " pod="crc-storage/crc-storage-crc-6qsrh" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.321864 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4cf09896-f563-4c1c-b34e-a3a0c4318354-crc-storage\") pod \"crc-storage-crc-6qsrh\" (UID: \"4cf09896-f563-4c1c-b34e-a3a0c4318354\") " pod="crc-storage/crc-storage-crc-6qsrh" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.321916 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4cf09896-f563-4c1c-b34e-a3a0c4318354-node-mnt\") pod \"crc-storage-crc-6qsrh\" (UID: \"4cf09896-f563-4c1c-b34e-a3a0c4318354\") " pod="crc-storage/crc-storage-crc-6qsrh" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.423277 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr266\" (UniqueName: \"kubernetes.io/projected/4cf09896-f563-4c1c-b34e-a3a0c4318354-kube-api-access-gr266\") pod \"crc-storage-crc-6qsrh\" (UID: \"4cf09896-f563-4c1c-b34e-a3a0c4318354\") " pod="crc-storage/crc-storage-crc-6qsrh" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.423354 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4cf09896-f563-4c1c-b34e-a3a0c4318354-crc-storage\") pod \"crc-storage-crc-6qsrh\" (UID: \"4cf09896-f563-4c1c-b34e-a3a0c4318354\") " pod="crc-storage/crc-storage-crc-6qsrh" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.423563 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4cf09896-f563-4c1c-b34e-a3a0c4318354-node-mnt\") pod \"crc-storage-crc-6qsrh\" (UID: \"4cf09896-f563-4c1c-b34e-a3a0c4318354\") " pod="crc-storage/crc-storage-crc-6qsrh" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.423951 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4cf09896-f563-4c1c-b34e-a3a0c4318354-node-mnt\") pod \"crc-storage-crc-6qsrh\" (UID: \"4cf09896-f563-4c1c-b34e-a3a0c4318354\") " 
pod="crc-storage/crc-storage-crc-6qsrh" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.424219 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4cf09896-f563-4c1c-b34e-a3a0c4318354-crc-storage\") pod \"crc-storage-crc-6qsrh\" (UID: \"4cf09896-f563-4c1c-b34e-a3a0c4318354\") " pod="crc-storage/crc-storage-crc-6qsrh" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.440469 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr266\" (UniqueName: \"kubernetes.io/projected/4cf09896-f563-4c1c-b34e-a3a0c4318354-kube-api-access-gr266\") pod \"crc-storage-crc-6qsrh\" (UID: \"4cf09896-f563-4c1c-b34e-a3a0c4318354\") " pod="crc-storage/crc-storage-crc-6qsrh" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.506811 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-6qsrh" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.887257 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="274be603-892a-4443-942c-1a84e997a2a3" path="/var/lib/kubelet/pods/274be603-892a-4443-942c-1a84e997a2a3/volumes" Oct 08 14:38:16 crc kubenswrapper[5065]: I1008 14:38:16.949318 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-6qsrh"] Oct 08 14:38:17 crc kubenswrapper[5065]: I1008 14:38:17.582015 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6qsrh" event={"ID":"4cf09896-f563-4c1c-b34e-a3a0c4318354","Type":"ContainerStarted","Data":"0ed96e298fbd3c087ffc7152e461357e21eca4f28006a5505161b703bb5618fa"} Oct 08 14:38:18 crc kubenswrapper[5065]: I1008 14:38:18.595411 5065 generic.go:334] "Generic (PLEG): container finished" podID="4cf09896-f563-4c1c-b34e-a3a0c4318354" containerID="39ed3acb4d33ccbfcf7a9adccfdc08ddb28c0f4e7057ca23e15d6126216d3fce" exitCode=0 Oct 08 14:38:18 crc kubenswrapper[5065]: I1008 14:38:18.595576 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6qsrh" event={"ID":"4cf09896-f563-4c1c-b34e-a3a0c4318354","Type":"ContainerDied","Data":"39ed3acb4d33ccbfcf7a9adccfdc08ddb28c0f4e7057ca23e15d6126216d3fce"} Oct 08 14:38:19 crc kubenswrapper[5065]: I1008 14:38:19.952502 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-6qsrh" Oct 08 14:38:19 crc kubenswrapper[5065]: I1008 14:38:19.977524 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4cf09896-f563-4c1c-b34e-a3a0c4318354-node-mnt\") pod \"4cf09896-f563-4c1c-b34e-a3a0c4318354\" (UID: \"4cf09896-f563-4c1c-b34e-a3a0c4318354\") " Oct 08 14:38:19 crc kubenswrapper[5065]: I1008 14:38:19.977642 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cf09896-f563-4c1c-b34e-a3a0c4318354-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "4cf09896-f563-4c1c-b34e-a3a0c4318354" (UID: "4cf09896-f563-4c1c-b34e-a3a0c4318354"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:38:19 crc kubenswrapper[5065]: I1008 14:38:19.977712 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4cf09896-f563-4c1c-b34e-a3a0c4318354-crc-storage\") pod \"4cf09896-f563-4c1c-b34e-a3a0c4318354\" (UID: \"4cf09896-f563-4c1c-b34e-a3a0c4318354\") " Oct 08 14:38:19 crc kubenswrapper[5065]: I1008 14:38:19.977765 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr266\" (UniqueName: \"kubernetes.io/projected/4cf09896-f563-4c1c-b34e-a3a0c4318354-kube-api-access-gr266\") pod \"4cf09896-f563-4c1c-b34e-a3a0c4318354\" (UID: \"4cf09896-f563-4c1c-b34e-a3a0c4318354\") " Oct 08 14:38:19 crc kubenswrapper[5065]: I1008 14:38:19.978224 5065 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4cf09896-f563-4c1c-b34e-a3a0c4318354-node-mnt\") on node \"crc\" DevicePath \"\"" Oct 08 14:38:19 crc kubenswrapper[5065]: I1008 14:38:19.982841 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf09896-f563-4c1c-b34e-a3a0c4318354-kube-api-access-gr266" (OuterVolumeSpecName: "kube-api-access-gr266") pod "4cf09896-f563-4c1c-b34e-a3a0c4318354" (UID: "4cf09896-f563-4c1c-b34e-a3a0c4318354"). InnerVolumeSpecName "kube-api-access-gr266". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:38:20 crc kubenswrapper[5065]: I1008 14:38:20.000404 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf09896-f563-4c1c-b34e-a3a0c4318354-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "4cf09896-f563-4c1c-b34e-a3a0c4318354" (UID: "4cf09896-f563-4c1c-b34e-a3a0c4318354"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:38:20 crc kubenswrapper[5065]: I1008 14:38:20.079161 5065 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4cf09896-f563-4c1c-b34e-a3a0c4318354-crc-storage\") on node \"crc\" DevicePath \"\"" Oct 08 14:38:20 crc kubenswrapper[5065]: I1008 14:38:20.079200 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr266\" (UniqueName: \"kubernetes.io/projected/4cf09896-f563-4c1c-b34e-a3a0c4318354-kube-api-access-gr266\") on node \"crc\" DevicePath \"\"" Oct 08 14:38:20 crc kubenswrapper[5065]: I1008 14:38:20.617611 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6qsrh" event={"ID":"4cf09896-f563-4c1c-b34e-a3a0c4318354","Type":"ContainerDied","Data":"0ed96e298fbd3c087ffc7152e461357e21eca4f28006a5505161b703bb5618fa"} Oct 08 14:38:20 crc kubenswrapper[5065]: I1008 14:38:20.617666 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ed96e298fbd3c087ffc7152e461357e21eca4f28006a5505161b703bb5618fa" Oct 08 14:38:20 crc kubenswrapper[5065]: I1008 14:38:20.617678 5065 util.go:48] "No ready sandbox for pod can be found. 
Oct 08 14:38:20 crc kubenswrapper[5065]: E1008 14:38:20.807841 5065 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cf09896_f563_4c1c_b34e_a3a0c4318354.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cf09896_f563_4c1c_b34e_a3a0c4318354.slice/crio-0ed96e298fbd3c087ffc7152e461357e21eca4f28006a5505161b703bb5618fa\": RecentStats: unable to find data in memory cache]"
Oct 08 14:38:29 crc kubenswrapper[5065]: I1008 14:38:29.508374 5065 scope.go:117] "RemoveContainer" containerID="2e2f1283121aa17e65e981235cc5f1b1e6eec09949062844b8db4dfe9ff3f371"
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.697395 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8969b"]
Oct 08 14:39:33 crc kubenswrapper[5065]: E1008 14:39:33.698566 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf09896-f563-4c1c-b34e-a3a0c4318354" containerName="storage"
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.698595 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf09896-f563-4c1c-b34e-a3a0c4318354" containerName="storage"
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.698947 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf09896-f563-4c1c-b34e-a3a0c4318354" containerName="storage"
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.700860 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.721862 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8969b"]
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.749282 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9beb80b-870e-4f3a-99c3-39d850c848e2-catalog-content\") pod \"redhat-marketplace-8969b\" (UID: \"a9beb80b-870e-4f3a-99c3-39d850c848e2\") " pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.749364 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d4cs\" (UniqueName: \"kubernetes.io/projected/a9beb80b-870e-4f3a-99c3-39d850c848e2-kube-api-access-9d4cs\") pod \"redhat-marketplace-8969b\" (UID: \"a9beb80b-870e-4f3a-99c3-39d850c848e2\") " pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.749398 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9beb80b-870e-4f3a-99c3-39d850c848e2-utilities\") pod \"redhat-marketplace-8969b\" (UID: \"a9beb80b-870e-4f3a-99c3-39d850c848e2\") " pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.850582 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9beb80b-870e-4f3a-99c3-39d850c848e2-catalog-content\") pod \"redhat-marketplace-8969b\" (UID: \"a9beb80b-870e-4f3a-99c3-39d850c848e2\") " pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.850647 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d4cs\" (UniqueName: \"kubernetes.io/projected/a9beb80b-870e-4f3a-99c3-39d850c848e2-kube-api-access-9d4cs\") pod \"redhat-marketplace-8969b\" (UID: \"a9beb80b-870e-4f3a-99c3-39d850c848e2\") " pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.850690 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9beb80b-870e-4f3a-99c3-39d850c848e2-utilities\") pod \"redhat-marketplace-8969b\" (UID: \"a9beb80b-870e-4f3a-99c3-39d850c848e2\") " pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.851241 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9beb80b-870e-4f3a-99c3-39d850c848e2-catalog-content\") pod \"redhat-marketplace-8969b\" (UID: \"a9beb80b-870e-4f3a-99c3-39d850c848e2\") " pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.851270 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9beb80b-870e-4f3a-99c3-39d850c848e2-utilities\") pod \"redhat-marketplace-8969b\" (UID: \"a9beb80b-870e-4f3a-99c3-39d850c848e2\") " pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:33 crc kubenswrapper[5065]: I1008 14:39:33.873777 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d4cs\" (UniqueName: \"kubernetes.io/projected/a9beb80b-870e-4f3a-99c3-39d850c848e2-kube-api-access-9d4cs\") pod \"redhat-marketplace-8969b\" (UID: \"a9beb80b-870e-4f3a-99c3-39d850c848e2\") " pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:34 crc kubenswrapper[5065]: I1008 14:39:34.046171 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:34 crc kubenswrapper[5065]: I1008 14:39:34.280163 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8969b"]
Oct 08 14:39:35 crc kubenswrapper[5065]: I1008 14:39:35.299747 5065 generic.go:334] "Generic (PLEG): container finished" podID="a9beb80b-870e-4f3a-99c3-39d850c848e2" containerID="cc0cd960d54f365a8981e15c558260b4c5a8226dd66911f677cd07fbe665dfdf" exitCode=0
Oct 08 14:39:35 crc kubenswrapper[5065]: I1008 14:39:35.299960 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8969b" event={"ID":"a9beb80b-870e-4f3a-99c3-39d850c848e2","Type":"ContainerDied","Data":"cc0cd960d54f365a8981e15c558260b4c5a8226dd66911f677cd07fbe665dfdf"}
Oct 08 14:39:35 crc kubenswrapper[5065]: I1008 14:39:35.300399 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8969b" event={"ID":"a9beb80b-870e-4f3a-99c3-39d850c848e2","Type":"ContainerStarted","Data":"09715f0b090bace6882aba9cafdbd36da606edf4f05542e31afb31394c8d39bd"}
Oct 08 14:39:36 crc kubenswrapper[5065]: I1008 14:39:36.311613 5065 generic.go:334] "Generic (PLEG): container finished" podID="a9beb80b-870e-4f3a-99c3-39d850c848e2" containerID="74118780321d24d9f8ba5c7dd19e5a81cb523f32f91a72e9aaeedce10ec1077f" exitCode=0
Oct 08 14:39:36 crc kubenswrapper[5065]: I1008 14:39:36.311832 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8969b" event={"ID":"a9beb80b-870e-4f3a-99c3-39d850c848e2","Type":"ContainerDied","Data":"74118780321d24d9f8ba5c7dd19e5a81cb523f32f91a72e9aaeedce10ec1077f"}
Oct 08 14:39:37 crc kubenswrapper[5065]: I1008 14:39:37.326178 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8969b" event={"ID":"a9beb80b-870e-4f3a-99c3-39d850c848e2","Type":"ContainerStarted","Data":"e17ccabc34bbdc93590c9328a13c6dd458d5f4b9f416b2194eb7a3552628ba9b"}
Oct 08 14:39:37 crc kubenswrapper[5065]: I1008 14:39:37.357551 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8969b" podStartSLOduration=2.89062523 podStartE2EDuration="4.357525713s" podCreationTimestamp="2025-10-08 14:39:33 +0000 UTC" firstStartedPulling="2025-10-08 14:39:35.30226737 +0000 UTC m=+4877.079649137" lastFinishedPulling="2025-10-08 14:39:36.769167823 +0000 UTC m=+4878.546549620" observedRunningTime="2025-10-08 14:39:37.350979544 +0000 UTC m=+4879.128361351" watchObservedRunningTime="2025-10-08 14:39:37.357525713 +0000 UTC m=+4879.134907500"
Oct 08 14:39:44 crc kubenswrapper[5065]: I1008 14:39:44.046811 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:44 crc kubenswrapper[5065]: I1008 14:39:44.047461 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:44 crc kubenswrapper[5065]: I1008 14:39:44.094885 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:44 crc kubenswrapper[5065]: I1008 14:39:44.457928 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:44 crc kubenswrapper[5065]: I1008 14:39:44.505297 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8969b"]
Oct 08 14:39:46 crc kubenswrapper[5065]: I1008 14:39:46.406007 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8969b" podUID="a9beb80b-870e-4f3a-99c3-39d850c848e2" containerName="registry-server" containerID="cri-o://e17ccabc34bbdc93590c9328a13c6dd458d5f4b9f416b2194eb7a3552628ba9b" gracePeriod=2
Oct 08 14:39:46 crc kubenswrapper[5065]: I1008 14:39:46.782026 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8969b"
Oct 08 14:39:46 crc kubenswrapper[5065]: I1008 14:39:46.979126 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9beb80b-870e-4f3a-99c3-39d850c848e2-catalog-content\") pod \"a9beb80b-870e-4f3a-99c3-39d850c848e2\" (UID: \"a9beb80b-870e-4f3a-99c3-39d850c848e2\") "
Oct 08 14:39:46 crc kubenswrapper[5065]: I1008 14:39:46.979482 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9beb80b-870e-4f3a-99c3-39d850c848e2-utilities\") pod \"a9beb80b-870e-4f3a-99c3-39d850c848e2\" (UID: \"a9beb80b-870e-4f3a-99c3-39d850c848e2\") "
Oct 08 14:39:46 crc kubenswrapper[5065]: I1008 14:39:46.979522 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d4cs\" (UniqueName: \"kubernetes.io/projected/a9beb80b-870e-4f3a-99c3-39d850c848e2-kube-api-access-9d4cs\") pod \"a9beb80b-870e-4f3a-99c3-39d850c848e2\" (UID: \"a9beb80b-870e-4f3a-99c3-39d850c848e2\") "
Oct 08 14:39:46 crc kubenswrapper[5065]: I1008 14:39:46.981383 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9beb80b-870e-4f3a-99c3-39d850c848e2-utilities" (OuterVolumeSpecName: "utilities") pod "a9beb80b-870e-4f3a-99c3-39d850c848e2" (UID: "a9beb80b-870e-4f3a-99c3-39d850c848e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:39:46 crc kubenswrapper[5065]: I1008 14:39:46.988884 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9beb80b-870e-4f3a-99c3-39d850c848e2-kube-api-access-9d4cs" (OuterVolumeSpecName: "kube-api-access-9d4cs") pod "a9beb80b-870e-4f3a-99c3-39d850c848e2" (UID: "a9beb80b-870e-4f3a-99c3-39d850c848e2"). InnerVolumeSpecName "kube-api-access-9d4cs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.006950 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9beb80b-870e-4f3a-99c3-39d850c848e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9beb80b-870e-4f3a-99c3-39d850c848e2" (UID: "a9beb80b-870e-4f3a-99c3-39d850c848e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.081221 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9beb80b-870e-4f3a-99c3-39d850c848e2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.081267 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9beb80b-870e-4f3a-99c3-39d850c848e2-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.081278 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d4cs\" (UniqueName: \"kubernetes.io/projected/a9beb80b-870e-4f3a-99c3-39d850c848e2-kube-api-access-9d4cs\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.415847 5065 generic.go:334] "Generic (PLEG): container finished" podID="a9beb80b-870e-4f3a-99c3-39d850c848e2" containerID="e17ccabc34bbdc93590c9328a13c6dd458d5f4b9f416b2194eb7a3552628ba9b" exitCode=0 Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.415894 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8969b" event={"ID":"a9beb80b-870e-4f3a-99c3-39d850c848e2","Type":"ContainerDied","Data":"e17ccabc34bbdc93590c9328a13c6dd458d5f4b9f416b2194eb7a3552628ba9b"} Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.415952 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8969b" event={"ID":"a9beb80b-870e-4f3a-99c3-39d850c848e2","Type":"ContainerDied","Data":"09715f0b090bace6882aba9cafdbd36da606edf4f05542e31afb31394c8d39bd"} Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.415973 5065 scope.go:117] "RemoveContainer" containerID="e17ccabc34bbdc93590c9328a13c6dd458d5f4b9f416b2194eb7a3552628ba9b" Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.416045 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8969b" Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.443640 5065 scope.go:117] "RemoveContainer" containerID="74118780321d24d9f8ba5c7dd19e5a81cb523f32f91a72e9aaeedce10ec1077f" Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.459022 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8969b"] Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.464599 5065 scope.go:117] "RemoveContainer" containerID="cc0cd960d54f365a8981e15c558260b4c5a8226dd66911f677cd07fbe665dfdf" Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.471105 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8969b"] Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.489890 5065 scope.go:117] "RemoveContainer" containerID="e17ccabc34bbdc93590c9328a13c6dd458d5f4b9f416b2194eb7a3552628ba9b" Oct 08 14:39:47 crc kubenswrapper[5065]: E1008 14:39:47.490348 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e17ccabc34bbdc93590c9328a13c6dd458d5f4b9f416b2194eb7a3552628ba9b\": container with ID starting with e17ccabc34bbdc93590c9328a13c6dd458d5f4b9f416b2194eb7a3552628ba9b not found: ID does not exist" containerID="e17ccabc34bbdc93590c9328a13c6dd458d5f4b9f416b2194eb7a3552628ba9b" Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.490386 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e17ccabc34bbdc93590c9328a13c6dd458d5f4b9f416b2194eb7a3552628ba9b"} err="failed to get container status \"e17ccabc34bbdc93590c9328a13c6dd458d5f4b9f416b2194eb7a3552628ba9b\": rpc error: code = NotFound desc = could not find container \"e17ccabc34bbdc93590c9328a13c6dd458d5f4b9f416b2194eb7a3552628ba9b\": container with ID starting with e17ccabc34bbdc93590c9328a13c6dd458d5f4b9f416b2194eb7a3552628ba9b not found: ID does not exist" Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.490434 5065 scope.go:117] "RemoveContainer" containerID="74118780321d24d9f8ba5c7dd19e5a81cb523f32f91a72e9aaeedce10ec1077f" Oct 08 14:39:47 crc kubenswrapper[5065]: E1008 14:39:47.490817 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74118780321d24d9f8ba5c7dd19e5a81cb523f32f91a72e9aaeedce10ec1077f\": container with ID starting with 74118780321d24d9f8ba5c7dd19e5a81cb523f32f91a72e9aaeedce10ec1077f not found: ID does not exist" containerID="74118780321d24d9f8ba5c7dd19e5a81cb523f32f91a72e9aaeedce10ec1077f" Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.490859 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74118780321d24d9f8ba5c7dd19e5a81cb523f32f91a72e9aaeedce10ec1077f"} err="failed to get container status \"74118780321d24d9f8ba5c7dd19e5a81cb523f32f91a72e9aaeedce10ec1077f\": rpc error: code = NotFound desc = could not find container \"74118780321d24d9f8ba5c7dd19e5a81cb523f32f91a72e9aaeedce10ec1077f\": container with ID starting with 74118780321d24d9f8ba5c7dd19e5a81cb523f32f91a72e9aaeedce10ec1077f not found: ID does not exist" Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.490890 5065 scope.go:117] "RemoveContainer" containerID="cc0cd960d54f365a8981e15c558260b4c5a8226dd66911f677cd07fbe665dfdf" Oct 08 14:39:47 crc kubenswrapper[5065]: E1008 14:39:47.491229 5065 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cc0cd960d54f365a8981e15c558260b4c5a8226dd66911f677cd07fbe665dfdf\": container with ID starting with cc0cd960d54f365a8981e15c558260b4c5a8226dd66911f677cd07fbe665dfdf not found: ID does not exist" containerID="cc0cd960d54f365a8981e15c558260b4c5a8226dd66911f677cd07fbe665dfdf" Oct 08 14:39:47 crc kubenswrapper[5065]: I1008 14:39:47.491261 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc0cd960d54f365a8981e15c558260b4c5a8226dd66911f677cd07fbe665dfdf"} err="failed to get container status \"cc0cd960d54f365a8981e15c558260b4c5a8226dd66911f677cd07fbe665dfdf\": rpc error: code = NotFound desc = could not find container \"cc0cd960d54f365a8981e15c558260b4c5a8226dd66911f677cd07fbe665dfdf\": container with ID starting with cc0cd960d54f365a8981e15c558260b4c5a8226dd66911f677cd07fbe665dfdf not found: ID does not exist" Oct 08 14:39:48 crc kubenswrapper[5065]: I1008 14:39:48.889873 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9beb80b-870e-4f3a-99c3-39d850c848e2" path="/var/lib/kubelet/pods/a9beb80b-870e-4f3a-99c3-39d850c848e2/volumes" Oct 08 14:39:54 crc kubenswrapper[5065]: I1008 14:39:54.375807 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:39:54 crc kubenswrapper[5065]: I1008 14:39:54.376373 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.269299 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kb7qb"] Oct 08 14:40:11 crc kubenswrapper[5065]: E1008 14:40:11.270205 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9beb80b-870e-4f3a-99c3-39d850c848e2" containerName="extract-content" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.270222 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9beb80b-870e-4f3a-99c3-39d850c848e2" containerName="extract-content" Oct 08 14:40:11 crc kubenswrapper[5065]: E1008 14:40:11.270239 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9beb80b-870e-4f3a-99c3-39d850c848e2" containerName="registry-server" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.270245 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9beb80b-870e-4f3a-99c3-39d850c848e2" containerName="registry-server" Oct 08 14:40:11 crc kubenswrapper[5065]: E1008 14:40:11.270253 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9beb80b-870e-4f3a-99c3-39d850c848e2" containerName="extract-utilities" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.270260 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9beb80b-870e-4f3a-99c3-39d850c848e2" containerName="extract-utilities" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.270506 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9beb80b-870e-4f3a-99c3-39d850c848e2" containerName="registry-server" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 
14:40:11.271716 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kb7qb" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.278265 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kb7qb"] Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.373896 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gd5q\" (UniqueName: \"kubernetes.io/projected/1d471769-19ef-4399-8dd2-eb1d2f13faaa-kube-api-access-7gd5q\") pod \"redhat-operators-kb7qb\" (UID: \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\") " pod="openshift-marketplace/redhat-operators-kb7qb" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.373959 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d471769-19ef-4399-8dd2-eb1d2f13faaa-utilities\") pod \"redhat-operators-kb7qb\" (UID: \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\") " pod="openshift-marketplace/redhat-operators-kb7qb" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.374051 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d471769-19ef-4399-8dd2-eb1d2f13faaa-catalog-content\") pod \"redhat-operators-kb7qb\" (UID: \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\") " pod="openshift-marketplace/redhat-operators-kb7qb" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.475378 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gd5q\" (UniqueName: \"kubernetes.io/projected/1d471769-19ef-4399-8dd2-eb1d2f13faaa-kube-api-access-7gd5q\") pod \"redhat-operators-kb7qb\" (UID: \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\") " pod="openshift-marketplace/redhat-operators-kb7qb" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.475454 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d471769-19ef-4399-8dd2-eb1d2f13faaa-utilities\") pod \"redhat-operators-kb7qb\" (UID: \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\") " pod="openshift-marketplace/redhat-operators-kb7qb" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.475498 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d471769-19ef-4399-8dd2-eb1d2f13faaa-catalog-content\") pod \"redhat-operators-kb7qb\" (UID: \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\") " pod="openshift-marketplace/redhat-operators-kb7qb" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.475900 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d471769-19ef-4399-8dd2-eb1d2f13faaa-utilities\") pod \"redhat-operators-kb7qb\" (UID: \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\") " pod="openshift-marketplace/redhat-operators-kb7qb" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.475930 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d471769-19ef-4399-8dd2-eb1d2f13faaa-catalog-content\") pod \"redhat-operators-kb7qb\" (UID: \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\") " pod="openshift-marketplace/redhat-operators-kb7qb" Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.501444 5065 
Oct 08 14:40:11 crc kubenswrapper[5065]: I1008 14:40:11.594240 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kb7qb"
Oct 08 14:40:12 crc kubenswrapper[5065]: I1008 14:40:12.035262 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kb7qb"]
Oct 08 14:40:12 crc kubenswrapper[5065]: I1008 14:40:12.655834 5065 generic.go:334] "Generic (PLEG): container finished" podID="1d471769-19ef-4399-8dd2-eb1d2f13faaa" containerID="c4190ad808ecaa862b8c8b713ffd42b2524b6b184bfc0d41e9e142cb9f2f0f5f" exitCode=0
Oct 08 14:40:12 crc kubenswrapper[5065]: I1008 14:40:12.655884 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb7qb" event={"ID":"1d471769-19ef-4399-8dd2-eb1d2f13faaa","Type":"ContainerDied","Data":"c4190ad808ecaa862b8c8b713ffd42b2524b6b184bfc0d41e9e142cb9f2f0f5f"}
Oct 08 14:40:12 crc kubenswrapper[5065]: I1008 14:40:12.655918 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb7qb" event={"ID":"1d471769-19ef-4399-8dd2-eb1d2f13faaa","Type":"ContainerStarted","Data":"d745f09eb60407bacdf285f909daae28ff3394cbc2f5e9d3e90d83e8e0d5f8cd"}
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.155686 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-h76rl"]
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.157244 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.160307 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.160551 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.160745 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.163312 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.163554 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-8rg62"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.171820 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-678578b8df-jtg49"]
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.174703 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-678578b8df-jtg49"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.178022 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-h76rl"]
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.192987 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-jtg49"]
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.200681 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkdcm\" (UniqueName: \"kubernetes.io/projected/4c8e6b2f-f65e-4721-a624-89ae6ab2be85-kube-api-access-qkdcm\") pod \"dnsmasq-dns-678578b8df-jtg49\" (UID: \"4c8e6b2f-f65e-4721-a624-89ae6ab2be85\") " pod="openstack/dnsmasq-dns-678578b8df-jtg49"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.200878 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg4d9\" (UniqueName: \"kubernetes.io/projected/b3059e28-f84e-4cce-99e2-580e675c4580-kube-api-access-qg4d9\") pod \"dnsmasq-dns-6b8f87f5c5-h76rl\" (UID: \"b3059e28-f84e-4cce-99e2-580e675c4580\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.200948 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3059e28-f84e-4cce-99e2-580e675c4580-config\") pod \"dnsmasq-dns-6b8f87f5c5-h76rl\" (UID: \"b3059e28-f84e-4cce-99e2-580e675c4580\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.201140 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3059e28-f84e-4cce-99e2-580e675c4580-dns-svc\") pod \"dnsmasq-dns-6b8f87f5c5-h76rl\" (UID: \"b3059e28-f84e-4cce-99e2-580e675c4580\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.201216 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8e6b2f-f65e-4721-a624-89ae6ab2be85-config\") pod \"dnsmasq-dns-678578b8df-jtg49\" (UID: \"4c8e6b2f-f65e-4721-a624-89ae6ab2be85\") " pod="openstack/dnsmasq-dns-678578b8df-jtg49"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.302192 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3059e28-f84e-4cce-99e2-580e675c4580-dns-svc\") pod \"dnsmasq-dns-6b8f87f5c5-h76rl\" (UID: \"b3059e28-f84e-4cce-99e2-580e675c4580\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.302237 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8e6b2f-f65e-4721-a624-89ae6ab2be85-config\") pod \"dnsmasq-dns-678578b8df-jtg49\" (UID: \"4c8e6b2f-f65e-4721-a624-89ae6ab2be85\") " pod="openstack/dnsmasq-dns-678578b8df-jtg49"
Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.302286 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkdcm\" (UniqueName: \"kubernetes.io/projected/4c8e6b2f-f65e-4721-a624-89ae6ab2be85-kube-api-access-qkdcm\") pod \"dnsmasq-dns-678578b8df-jtg49\" (UID: \"4c8e6b2f-f65e-4721-a624-89ae6ab2be85\") " pod="openstack/dnsmasq-dns-678578b8df-jtg49"
pod="openstack/dnsmasq-dns-678578b8df-jtg49" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.302330 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg4d9\" (UniqueName: \"kubernetes.io/projected/b3059e28-f84e-4cce-99e2-580e675c4580-kube-api-access-qg4d9\") pod \"dnsmasq-dns-6b8f87f5c5-h76rl\" (UID: \"b3059e28-f84e-4cce-99e2-580e675c4580\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.302346 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3059e28-f84e-4cce-99e2-580e675c4580-config\") pod \"dnsmasq-dns-6b8f87f5c5-h76rl\" (UID: \"b3059e28-f84e-4cce-99e2-580e675c4580\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.303259 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3059e28-f84e-4cce-99e2-580e675c4580-config\") pod \"dnsmasq-dns-6b8f87f5c5-h76rl\" (UID: \"b3059e28-f84e-4cce-99e2-580e675c4580\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.303886 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8e6b2f-f65e-4721-a624-89ae6ab2be85-config\") pod \"dnsmasq-dns-678578b8df-jtg49\" (UID: \"4c8e6b2f-f65e-4721-a624-89ae6ab2be85\") " pod="openstack/dnsmasq-dns-678578b8df-jtg49" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.304345 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3059e28-f84e-4cce-99e2-580e675c4580-dns-svc\") pod \"dnsmasq-dns-6b8f87f5c5-h76rl\" (UID: \"b3059e28-f84e-4cce-99e2-580e675c4580\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.338239 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg4d9\" (UniqueName: \"kubernetes.io/projected/b3059e28-f84e-4cce-99e2-580e675c4580-kube-api-access-qg4d9\") pod \"dnsmasq-dns-6b8f87f5c5-h76rl\" (UID: \"b3059e28-f84e-4cce-99e2-580e675c4580\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.340167 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkdcm\" (UniqueName: \"kubernetes.io/projected/4c8e6b2f-f65e-4721-a624-89ae6ab2be85-kube-api-access-qkdcm\") pod \"dnsmasq-dns-678578b8df-jtg49\" (UID: \"4c8e6b2f-f65e-4721-a624-89ae6ab2be85\") " pod="openstack/dnsmasq-dns-678578b8df-jtg49" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.483573 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.494849 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-678578b8df-jtg49" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.569118 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-jtg49"] Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.609438 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b7964457-s5k6d"] Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.610861 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b7964457-s5k6d" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.618363 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b7964457-s5k6d"] Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.807463 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42105b65-e62a-47e6-9290-132f277aa57f-config\") pod \"dnsmasq-dns-8b7964457-s5k6d\" (UID: \"42105b65-e62a-47e6-9290-132f277aa57f\") " pod="openstack/dnsmasq-dns-8b7964457-s5k6d" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.807791 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42105b65-e62a-47e6-9290-132f277aa57f-dns-svc\") pod \"dnsmasq-dns-8b7964457-s5k6d\" (UID: \"42105b65-e62a-47e6-9290-132f277aa57f\") " pod="openstack/dnsmasq-dns-8b7964457-s5k6d" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.807863 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqqqq\" (UniqueName: \"kubernetes.io/projected/42105b65-e62a-47e6-9290-132f277aa57f-kube-api-access-mqqqq\") pod \"dnsmasq-dns-8b7964457-s5k6d\" (UID: \"42105b65-e62a-47e6-9290-132f277aa57f\") " pod="openstack/dnsmasq-dns-8b7964457-s5k6d" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.843907 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-jtg49"] Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.908733 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42105b65-e62a-47e6-9290-132f277aa57f-config\") pod \"dnsmasq-dns-8b7964457-s5k6d\" (UID: \"42105b65-e62a-47e6-9290-132f277aa57f\") " pod="openstack/dnsmasq-dns-8b7964457-s5k6d" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.908775 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42105b65-e62a-47e6-9290-132f277aa57f-dns-svc\") pod \"dnsmasq-dns-8b7964457-s5k6d\" (UID: \"42105b65-e62a-47e6-9290-132f277aa57f\") " pod="openstack/dnsmasq-dns-8b7964457-s5k6d" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.908843 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqqqq\" (UniqueName: \"kubernetes.io/projected/42105b65-e62a-47e6-9290-132f277aa57f-kube-api-access-mqqqq\") pod \"dnsmasq-dns-8b7964457-s5k6d\" (UID: \"42105b65-e62a-47e6-9290-132f277aa57f\") " pod="openstack/dnsmasq-dns-8b7964457-s5k6d" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.909573 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42105b65-e62a-47e6-9290-132f277aa57f-dns-svc\") pod \"dnsmasq-dns-8b7964457-s5k6d\" (UID: \"42105b65-e62a-47e6-9290-132f277aa57f\") " pod="openstack/dnsmasq-dns-8b7964457-s5k6d" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.909628 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42105b65-e62a-47e6-9290-132f277aa57f-config\") pod \"dnsmasq-dns-8b7964457-s5k6d\" (UID: \"42105b65-e62a-47e6-9290-132f277aa57f\") " pod="openstack/dnsmasq-dns-8b7964457-s5k6d" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.928854 5065 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqqqq\" (UniqueName: \"kubernetes.io/projected/42105b65-e62a-47e6-9290-132f277aa57f-kube-api-access-mqqqq\") pod \"dnsmasq-dns-8b7964457-s5k6d\" (UID: \"42105b65-e62a-47e6-9290-132f277aa57f\") " pod="openstack/dnsmasq-dns-8b7964457-s5k6d" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.949802 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b7964457-s5k6d" Oct 08 14:40:13 crc kubenswrapper[5065]: I1008 14:40:13.990746 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-h76rl"] Oct 08 14:40:14 crc kubenswrapper[5065]: W1008 14:40:14.004656 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3059e28_f84e_4cce_99e2_580e675c4580.slice/crio-2ea9948131335c5b2b6d77edf5c423c3b9d5c9fd436973b3f7b000f17cd53259 WatchSource:0}: Error finding container 2ea9948131335c5b2b6d77edf5c423c3b9d5c9fd436973b3f7b000f17cd53259: Status 404 returned error can't find the container with id 2ea9948131335c5b2b6d77edf5c423c3b9d5c9fd436973b3f7b000f17cd53259 Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.081808 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-h76rl"] Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.102274 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-rrptn"] Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.103460 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.114506 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3d6f14-7344-4e2c-a932-af286e9861b2-config\") pod \"dnsmasq-dns-67d9f7fb89-rrptn\" (UID: \"aa3d6f14-7344-4e2c-a932-af286e9861b2\") " pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.114590 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa3d6f14-7344-4e2c-a932-af286e9861b2-dns-svc\") pod \"dnsmasq-dns-67d9f7fb89-rrptn\" (UID: \"aa3d6f14-7344-4e2c-a932-af286e9861b2\") " pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.114632 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r4zv\" (UniqueName: \"kubernetes.io/projected/aa3d6f14-7344-4e2c-a932-af286e9861b2-kube-api-access-5r4zv\") pod \"dnsmasq-dns-67d9f7fb89-rrptn\" (UID: \"aa3d6f14-7344-4e2c-a932-af286e9861b2\") " pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.137207 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-rrptn"] Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.216308 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa3d6f14-7344-4e2c-a932-af286e9861b2-dns-svc\") pod \"dnsmasq-dns-67d9f7fb89-rrptn\" (UID: \"aa3d6f14-7344-4e2c-a932-af286e9861b2\") " pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.217643 5065 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r4zv\" (UniqueName: \"kubernetes.io/projected/aa3d6f14-7344-4e2c-a932-af286e9861b2-kube-api-access-5r4zv\") pod \"dnsmasq-dns-67d9f7fb89-rrptn\" (UID: \"aa3d6f14-7344-4e2c-a932-af286e9861b2\") " pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.217734 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3d6f14-7344-4e2c-a932-af286e9861b2-config\") pod \"dnsmasq-dns-67d9f7fb89-rrptn\" (UID: \"aa3d6f14-7344-4e2c-a932-af286e9861b2\") " pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.218308 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa3d6f14-7344-4e2c-a932-af286e9861b2-dns-svc\") pod \"dnsmasq-dns-67d9f7fb89-rrptn\" (UID: \"aa3d6f14-7344-4e2c-a932-af286e9861b2\") " pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.218711 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3d6f14-7344-4e2c-a932-af286e9861b2-config\") pod \"dnsmasq-dns-67d9f7fb89-rrptn\" (UID: \"aa3d6f14-7344-4e2c-a932-af286e9861b2\") " pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.242359 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r4zv\" (UniqueName: \"kubernetes.io/projected/aa3d6f14-7344-4e2c-a932-af286e9861b2-kube-api-access-5r4zv\") pod \"dnsmasq-dns-67d9f7fb89-rrptn\" (UID: \"aa3d6f14-7344-4e2c-a932-af286e9861b2\") " pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.309186 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.313449 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b7964457-s5k6d"] Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.669619 5065 generic.go:334] "Generic (PLEG): container finished" podID="4c8e6b2f-f65e-4721-a624-89ae6ab2be85" containerID="6735857f7386ae6b99c12d84fdb68a1542fbceb3363fb82b0ef006b9530b81f5" exitCode=0 Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.669726 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678578b8df-jtg49" event={"ID":"4c8e6b2f-f65e-4721-a624-89ae6ab2be85","Type":"ContainerDied","Data":"6735857f7386ae6b99c12d84fdb68a1542fbceb3363fb82b0ef006b9530b81f5"} Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.669935 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678578b8df-jtg49" event={"ID":"4c8e6b2f-f65e-4721-a624-89ae6ab2be85","Type":"ContainerStarted","Data":"6856e8102c14926e257677e9e3650cce1791f9e24920509bd1928e798785b282"} Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.671169 5065 generic.go:334] "Generic (PLEG): container finished" podID="42105b65-e62a-47e6-9290-132f277aa57f" containerID="ed0ec689bdd05f668850b64b7205558b0eefbbe8aa4cc6d934c8c89ed48dfc56" exitCode=0 Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.671235 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b7964457-s5k6d" event={"ID":"42105b65-e62a-47e6-9290-132f277aa57f","Type":"ContainerDied","Data":"ed0ec689bdd05f668850b64b7205558b0eefbbe8aa4cc6d934c8c89ed48dfc56"} Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.671261 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b7964457-s5k6d" event={"ID":"42105b65-e62a-47e6-9290-132f277aa57f","Type":"ContainerStarted","Data":"cbc3d27b37aa613da31c6ea5272e265bd8250862914f3d02c28fdabf0fcdf9da"} Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.676660 5065 generic.go:334] "Generic (PLEG): container finished" podID="1d471769-19ef-4399-8dd2-eb1d2f13faaa" containerID="080fb1976d07e2e66edf16bdec81137f2de692cf4561079dbbfa40d4ff749c5c" exitCode=0 Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.676753 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb7qb" event={"ID":"1d471769-19ef-4399-8dd2-eb1d2f13faaa","Type":"ContainerDied","Data":"080fb1976d07e2e66edf16bdec81137f2de692cf4561079dbbfa40d4ff749c5c"} Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.678161 5065 generic.go:334] "Generic (PLEG): container finished" podID="b3059e28-f84e-4cce-99e2-580e675c4580" containerID="1cf68ec9886ad1493c5a6523c9bef640740d87e0076972d8f0a430746431ba74" exitCode=0 Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.678185 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl" event={"ID":"b3059e28-f84e-4cce-99e2-580e675c4580","Type":"ContainerDied","Data":"1cf68ec9886ad1493c5a6523c9bef640740d87e0076972d8f0a430746431ba74"} Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.678199 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl" event={"ID":"b3059e28-f84e-4cce-99e2-580e675c4580","Type":"ContainerStarted","Data":"2ea9948131335c5b2b6d77edf5c423c3b9d5c9fd436973b3f7b000f17cd53259"} Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.767579 5065 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-rrptn"] Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.780230 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.781820 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.786946 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.787030 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4npwx" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.787200 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.787318 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.787353 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.787317 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.787567 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.802748 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:40:14 crc kubenswrapper[5065]: E1008 14:40:14.896714 5065 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Oct 08 14:40:14 crc kubenswrapper[5065]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/42105b65-e62a-47e6-9290-132f277aa57f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 08 14:40:14 crc kubenswrapper[5065]: > podSandboxID="cbc3d27b37aa613da31c6ea5272e265bd8250862914f3d02c28fdabf0fcdf9da" Oct 08 14:40:14 crc kubenswrapper[5065]: E1008 14:40:14.896869 5065 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 08 14:40:14 crc kubenswrapper[5065]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb6hc5h68h68h594h659hdbh679h65ch5f6hdch6h5b9h8fh55hfhf8h57fhc7h56ch687h669h559h678h5dhc7hf7h697h5d6h9ch669h54fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqqqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8b7964457-s5k6d_openstack(42105b65-e62a-47e6-9290-132f277aa57f): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/42105b65-e62a-47e6-9290-132f277aa57f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 08 14:40:14 crc kubenswrapper[5065]: > logger="UnhandledError" Oct 08 14:40:14 crc kubenswrapper[5065]: E1008 14:40:14.898299 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/42105b65-e62a-47e6-9290-132f277aa57f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-8b7964457-s5k6d" podUID="42105b65-e62a-47e6-9290-132f277aa57f" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.925477 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85ce3937-b175-4380-8a3d-f24a319e67e0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.925530 5065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49e8eaf2-6217-418c-ad6c-294176d90954\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49e8eaf2-6217-418c-ad6c-294176d90954\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.925552 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.925613 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.925636 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.925653 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85ce3937-b175-4380-8a3d-f24a319e67e0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.925675 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.925697 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7sht\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-kube-api-access-b7sht\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.925714 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:14 crc kubenswrapper[5065]: I1008 14:40:14.925758 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:14 crc 
kubenswrapper[5065]: I1008 14:40:14.925792 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.028979 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.029261 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85ce3937-b175-4380-8a3d-f24a319e67e0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.029401 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49e8eaf2-6217-418c-ad6c-294176d90954\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49e8eaf2-6217-418c-ad6c-294176d90954\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.029550 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.029754 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.029938 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.030086 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85ce3937-b175-4380-8a3d-f24a319e67e0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.030195 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.030298 5065 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b7sht\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-kube-api-access-b7sht\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.030385 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.030971 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.031383 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.031404 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.031990 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.032205 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.034694 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.035379 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.035854 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.036391 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85ce3937-b175-4380-8a3d-f24a319e67e0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.037604 5065 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.037689 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49e8eaf2-6217-418c-ad6c-294176d90954\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49e8eaf2-6217-418c-ad6c-294176d90954\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/56fcdab089bca57706b366626fa0bf961f6dc474d430607fcd9bc885644ea46a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.039194 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85ce3937-b175-4380-8a3d-f24a319e67e0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.044511 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-678578b8df-jtg49" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.046511 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7sht\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-kube-api-access-b7sht\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.069617 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.081178 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49e8eaf2-6217-418c-ad6c-294176d90954\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49e8eaf2-6217-418c-ad6c-294176d90954\") pod \"rabbitmq-cell1-server-0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:15 crc kubenswrapper[5065]: I1008 14:40:15.109725 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.230932 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:40:16 crc kubenswrapper[5065]: E1008 14:40:15.231533 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3059e28-f84e-4cce-99e2-580e675c4580" containerName="init" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.231549 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3059e28-f84e-4cce-99e2-580e675c4580" containerName="init" Oct 08 14:40:16 crc kubenswrapper[5065]: E1008 14:40:15.231590 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8e6b2f-f65e-4721-a624-89ae6ab2be85" containerName="init" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.231597 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8e6b2f-f65e-4721-a624-89ae6ab2be85" containerName="init" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.231761 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3059e28-f84e-4cce-99e2-580e675c4580" containerName="init" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.231788 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8e6b2f-f65e-4721-a624-89ae6ab2be85" containerName="init" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.232550 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.235969 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.236233 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.236377 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.237487 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.237681 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.237937 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k2fgx" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.238121 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.238628 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg4d9\" (UniqueName: \"kubernetes.io/projected/b3059e28-f84e-4cce-99e2-580e675c4580-kube-api-access-qg4d9\") pod \"b3059e28-f84e-4cce-99e2-580e675c4580\" (UID: \"b3059e28-f84e-4cce-99e2-580e675c4580\") " Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.238762 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8e6b2f-f65e-4721-a624-89ae6ab2be85-config\") pod \"4c8e6b2f-f65e-4721-a624-89ae6ab2be85\" (UID: \"4c8e6b2f-f65e-4721-a624-89ae6ab2be85\") " Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.238828 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-qkdcm\" (UniqueName: \"kubernetes.io/projected/4c8e6b2f-f65e-4721-a624-89ae6ab2be85-kube-api-access-qkdcm\") pod \"4c8e6b2f-f65e-4721-a624-89ae6ab2be85\" (UID: \"4c8e6b2f-f65e-4721-a624-89ae6ab2be85\") " Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.238882 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3059e28-f84e-4cce-99e2-580e675c4580-dns-svc\") pod \"b3059e28-f84e-4cce-99e2-580e675c4580\" (UID: \"b3059e28-f84e-4cce-99e2-580e675c4580\") " Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.238913 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3059e28-f84e-4cce-99e2-580e675c4580-config\") pod \"b3059e28-f84e-4cce-99e2-580e675c4580\" (UID: \"b3059e28-f84e-4cce-99e2-580e675c4580\") " Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.242448 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.243912 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8e6b2f-f65e-4721-a624-89ae6ab2be85-kube-api-access-qkdcm" (OuterVolumeSpecName: "kube-api-access-qkdcm") pod "4c8e6b2f-f65e-4721-a624-89ae6ab2be85" (UID: "4c8e6b2f-f65e-4721-a624-89ae6ab2be85"). InnerVolumeSpecName "kube-api-access-qkdcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.258941 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3059e28-f84e-4cce-99e2-580e675c4580-kube-api-access-qg4d9" (OuterVolumeSpecName: "kube-api-access-qg4d9") pod "b3059e28-f84e-4cce-99e2-580e675c4580" (UID: "b3059e28-f84e-4cce-99e2-580e675c4580"). InnerVolumeSpecName "kube-api-access-qg4d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.263230 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3059e28-f84e-4cce-99e2-580e675c4580-config" (OuterVolumeSpecName: "config") pod "b3059e28-f84e-4cce-99e2-580e675c4580" (UID: "b3059e28-f84e-4cce-99e2-580e675c4580"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.291180 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3059e28-f84e-4cce-99e2-580e675c4580-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b3059e28-f84e-4cce-99e2-580e675c4580" (UID: "b3059e28-f84e-4cce-99e2-580e675c4580"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.291918 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8e6b2f-f65e-4721-a624-89ae6ab2be85-config" (OuterVolumeSpecName: "config") pod "4c8e6b2f-f65e-4721-a624-89ae6ab2be85" (UID: "4c8e6b2f-f65e-4721-a624-89ae6ab2be85"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.341713 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.341779 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-config-data\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.341819 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.341877 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.341935 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.341962 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a55db36a-c50d-4e84-b228-0c3d0b7d5578-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.341985 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a55db36a-c50d-4e84-b228-0c3d0b7d5578-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.342009 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.342067 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " 
pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.342093 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.342131 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgz5d\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-kube-api-access-bgz5d\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.342180 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8e6b2f-f65e-4721-a624-89ae6ab2be85-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.342196 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkdcm\" (UniqueName: \"kubernetes.io/projected/4c8e6b2f-f65e-4721-a624-89ae6ab2be85-kube-api-access-qkdcm\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.342209 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3059e28-f84e-4cce-99e2-580e675c4580-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.342238 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3059e28-f84e-4cce-99e2-580e675c4580-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.342251 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg4d9\" (UniqueName: \"kubernetes.io/projected/b3059e28-f84e-4cce-99e2-580e675c4580-kube-api-access-qg4d9\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.443255 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.443307 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.443353 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgz5d\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-kube-api-access-bgz5d\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.443376 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.443457 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-config-data\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.443497 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.443570 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.443655 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.443708 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a55db36a-c50d-4e84-b228-0c3d0b7d5578-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.443736 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a55db36a-c50d-4e84-b228-0c3d0b7d5578-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.443787 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.444878 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.445054 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.445532 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.445613 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-config-data\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.445692 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.449403 5065 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.449484 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e3e5e10768210a27f1f3a9f9d62e6c95645c1992a8b16917bab76178a8b75825/globalmount\"" pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.450098 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.450168 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a55db36a-c50d-4e84-b228-0c3d0b7d5578-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.452096 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.452217 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a55db36a-c50d-4e84-b228-0c3d0b7d5578-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.473864 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgz5d\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-kube-api-access-bgz5d\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0" Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 
14:40:15.493009 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\") pod \"rabbitmq-server-0\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.556993 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.692972 5065 generic.go:334] "Generic (PLEG): container finished" podID="aa3d6f14-7344-4e2c-a932-af286e9861b2" containerID="40b50bbdc0aa92b243c8dbd858d16e0450938f2306f7a28e4829b365040473b2" exitCode=0
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.693053 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" event={"ID":"aa3d6f14-7344-4e2c-a932-af286e9861b2","Type":"ContainerDied","Data":"40b50bbdc0aa92b243c8dbd858d16e0450938f2306f7a28e4829b365040473b2"}
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.693308 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" event={"ID":"aa3d6f14-7344-4e2c-a932-af286e9861b2","Type":"ContainerStarted","Data":"16a0dfe20a9dcf73b6ed3d0d406f3cd6b1000fb9c43f611d410d801639997ac4"}
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.695946 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678578b8df-jtg49" event={"ID":"4c8e6b2f-f65e-4721-a624-89ae6ab2be85","Type":"ContainerDied","Data":"6856e8102c14926e257677e9e3650cce1791f9e24920509bd1928e798785b282"}
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.695963 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-678578b8df-jtg49"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.696037 5065 scope.go:117] "RemoveContainer" containerID="6735857f7386ae6b99c12d84fdb68a1542fbceb3363fb82b0ef006b9530b81f5"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.700357 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.702121 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8f87f5c5-h76rl" event={"ID":"b3059e28-f84e-4cce-99e2-580e675c4580","Type":"ContainerDied","Data":"2ea9948131335c5b2b6d77edf5c423c3b9d5c9fd436973b3f7b000f17cd53259"}
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.736511 5065 scope.go:117] "RemoveContainer" containerID="1cf68ec9886ad1493c5a6523c9bef640740d87e0076972d8f0a430746431ba74"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.782293 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-jtg49"]
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.790572 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-jtg49"]
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.811992 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-h76rl"]
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:15.815592 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-h76rl"]
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.275162 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.276566 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.278682 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5tnwk"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.278866 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.278975 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.279178 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.279494 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.284356 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.302974 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.462702 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/92c8e8bf-929a-41a7-9184-d416a49abd5c-secrets\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.462754 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e8620f0e-8388-4bf5-8241-736b4ab13da0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8620f0e-8388-4bf5-8241-736b4ab13da0\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.462777 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c8e8bf-929a-41a7-9184-d416a49abd5c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.462802 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/92c8e8bf-929a-41a7-9184-d416a49abd5c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.462838 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92c8e8bf-929a-41a7-9184-d416a49abd5c-kolla-config\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.462980 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt897\" (UniqueName: \"kubernetes.io/projected/92c8e8bf-929a-41a7-9184-d416a49abd5c-kube-api-access-pt897\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.463024 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c8e8bf-929a-41a7-9184-d416a49abd5c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.463079 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/92c8e8bf-929a-41a7-9184-d416a49abd5c-config-data-default\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.463102 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c8e8bf-929a-41a7-9184-d416a49abd5c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.491180 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 08 14:40:16 crc kubenswrapper[5065]: W1008 14:40:16.499759 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85ce3937_b175_4380_8a3d_f24a319e67e0.slice/crio-28d125bf59c6cf2ed4999fa7f15c74077dc344788dc01bf248b1c6414e437385 WatchSource:0}: Error finding container 28d125bf59c6cf2ed4999fa7f15c74077dc344788dc01bf248b1c6414e437385: Status 404 returned error can't find the container with id 28d125bf59c6cf2ed4999fa7f15c74077dc344788dc01bf248b1c6414e437385
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.507959 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.564352 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/92c8e8bf-929a-41a7-9184-d416a49abd5c-secrets\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.564431 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e8620f0e-8388-4bf5-8241-736b4ab13da0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8620f0e-8388-4bf5-8241-736b4ab13da0\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.564459 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c8e8bf-929a-41a7-9184-d416a49abd5c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.564496 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/92c8e8bf-929a-41a7-9184-d416a49abd5c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.564540 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92c8e8bf-929a-41a7-9184-d416a49abd5c-kolla-config\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.564569 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt897\" (UniqueName: \"kubernetes.io/projected/92c8e8bf-929a-41a7-9184-d416a49abd5c-kube-api-access-pt897\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.564592 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c8e8bf-929a-41a7-9184-d416a49abd5c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.564621 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/92c8e8bf-929a-41a7-9184-d416a49abd5c-config-data-default\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.564641 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c8e8bf-929a-41a7-9184-d416a49abd5c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.567320 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92c8e8bf-929a-41a7-9184-d416a49abd5c-kolla-config\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.568592 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c8e8bf-929a-41a7-9184-d416a49abd5c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.569171 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/92c8e8bf-929a-41a7-9184-d416a49abd5c-config-data-default\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.570964 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/92c8e8bf-929a-41a7-9184-d416a49abd5c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.577850 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c8e8bf-929a-41a7-9184-d416a49abd5c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.577937 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/92c8e8bf-929a-41a7-9184-d416a49abd5c-secrets\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.581292 5065 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.581397 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e8620f0e-8388-4bf5-8241-736b4ab13da0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8620f0e-8388-4bf5-8241-736b4ab13da0\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5e811dc96805cb849f70ff0df44e275f4ff913924f2a92bf111b0154eab89ada/globalmount\"" pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.581962 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c8e8bf-929a-41a7-9184-d416a49abd5c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.587625 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt897\" (UniqueName: \"kubernetes.io/projected/92c8e8bf-929a-41a7-9184-d416a49abd5c-kube-api-access-pt897\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.632650 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e8620f0e-8388-4bf5-8241-736b4ab13da0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8620f0e-8388-4bf5-8241-736b4ab13da0\") pod \"openstack-galera-0\" (UID: \"92c8e8bf-929a-41a7-9184-d416a49abd5c\") " pod="openstack/openstack-galera-0"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.708496 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" event={"ID":"aa3d6f14-7344-4e2c-a932-af286e9861b2","Type":"ContainerStarted","Data":"cbbd50030514967eb2b6ebf1e27967d02ce2abe9b93b546ae175ce3328d29ba8"}
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.708598 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.716903 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b7964457-s5k6d" event={"ID":"42105b65-e62a-47e6-9290-132f277aa57f","Type":"ContainerStarted","Data":"6612c1a35d017589d7162a9c25778345caf21726d0d4f5635e41d0f2d744d14a"}
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.717648 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b7964457-s5k6d"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.719619 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a55db36a-c50d-4e84-b228-0c3d0b7d5578","Type":"ContainerStarted","Data":"94dda64939c187dea4f9eb256287d86175ad1b6355019468cfde69d3b4f5fe59"}
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.725502 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb7qb" event={"ID":"1d471769-19ef-4399-8dd2-eb1d2f13faaa","Type":"ContainerStarted","Data":"a0847e476ae7e585da9b824d1f572d55d8e0ad58f772fa34076590f4a9ccbbaa"}
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.727210 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85ce3937-b175-4380-8a3d-f24a319e67e0","Type":"ContainerStarted","Data":"28d125bf59c6cf2ed4999fa7f15c74077dc344788dc01bf248b1c6414e437385"}
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.729226 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" podStartSLOduration=2.729206319 podStartE2EDuration="2.729206319s" podCreationTimestamp="2025-10-08 14:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:16.723848533 +0000 UTC m=+4918.501230290" watchObservedRunningTime="2025-10-08 14:40:16.729206319 +0000 UTC m=+4918.506588076"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.747258 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b7964457-s5k6d" podStartSLOduration=3.74723717 podStartE2EDuration="3.74723717s" podCreationTimestamp="2025-10-08 14:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:16.739952441 +0000 UTC m=+4918.517334218" watchObservedRunningTime="2025-10-08 14:40:16.74723717 +0000 UTC m=+4918.524618947"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.757677 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kb7qb" podStartSLOduration=2.580633237 podStartE2EDuration="5.757655073s" podCreationTimestamp="2025-10-08 14:40:11 +0000 UTC" firstStartedPulling="2025-10-08 14:40:12.657308727 +0000 UTC m=+4914.434690484" lastFinishedPulling="2025-10-08 14:40:15.834330563 +0000 UTC m=+4917.611712320" observedRunningTime="2025-10-08 14:40:16.75570162 +0000 UTC m=+4918.533083387" watchObservedRunningTime="2025-10-08 14:40:16.757655073 +0000 UTC m=+4918.535036830"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.885672 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8e6b2f-f65e-4721-a624-89ae6ab2be85" path="/var/lib/kubelet/pods/4c8e6b2f-f65e-4721-a624-89ae6ab2be85/volumes"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.886340 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3059e28-f84e-4cce-99e2-580e675c4580" path="/var/lib/kubelet/pods/b3059e28-f84e-4cce-99e2-580e675c4580/volumes"
Oct 08 14:40:16 crc kubenswrapper[5065]: I1008 14:40:16.892106 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: W1008 14:40:17.362913 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92c8e8bf_929a_41a7_9184_d416a49abd5c.slice/crio-e7b7264eb19fe01e3201d2cd762a723480d212e65be42a527093e1fdf59103b9 WatchSource:0}: Error finding container e7b7264eb19fe01e3201d2cd762a723480d212e65be42a527093e1fdf59103b9: Status 404 returned error can't find the container with id e7b7264eb19fe01e3201d2cd762a723480d212e65be42a527093e1fdf59103b9
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.365799 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.632826 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.634252 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.639271 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9w494"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.639353 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.639452 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.639604 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.658868 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.736460 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92c8e8bf-929a-41a7-9184-d416a49abd5c","Type":"ContainerStarted","Data":"e7b7264eb19fe01e3201d2cd762a723480d212e65be42a527093e1fdf59103b9"}
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.739359 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85ce3937-b175-4380-8a3d-f24a319e67e0","Type":"ContainerStarted","Data":"6fe64fd4856ff0b095b8af90a9f6a01efdb3f18d8f4a973f861380eeda93597d"}
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.782144 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924dec42-f6c3-4827-9944-b654fd9268ee-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.782249 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/924dec42-f6c3-4827-9944-b654fd9268ee-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.782320 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/924dec42-f6c3-4827-9944-b654fd9268ee-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.782367 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/924dec42-f6c3-4827-9944-b654fd9268ee-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.782462 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d86a72a1-834a-4cc8-af6f-ef7cef473bb3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d86a72a1-834a-4cc8-af6f-ef7cef473bb3\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.782747 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvqsh\" (UniqueName: \"kubernetes.io/projected/924dec42-f6c3-4827-9944-b654fd9268ee-kube-api-access-vvqsh\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.782841 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/924dec42-f6c3-4827-9944-b654fd9268ee-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.782915 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/924dec42-f6c3-4827-9944-b654fd9268ee-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.782949 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/924dec42-f6c3-4827-9944-b654fd9268ee-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.884350 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/924dec42-f6c3-4827-9944-b654fd9268ee-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.884718 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/924dec42-f6c3-4827-9944-b654fd9268ee-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.884787 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/924dec42-f6c3-4827-9944-b654fd9268ee-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.884822 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d86a72a1-834a-4cc8-af6f-ef7cef473bb3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d86a72a1-834a-4cc8-af6f-ef7cef473bb3\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.884928 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvqsh\" (UniqueName: \"kubernetes.io/projected/924dec42-f6c3-4827-9944-b654fd9268ee-kube-api-access-vvqsh\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.884978 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/924dec42-f6c3-4827-9944-b654fd9268ee-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.885025 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/924dec42-f6c3-4827-9944-b654fd9268ee-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.885055 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/924dec42-f6c3-4827-9944-b654fd9268ee-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.885105 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924dec42-f6c3-4827-9944-b654fd9268ee-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.886502 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/924dec42-f6c3-4827-9944-b654fd9268ee-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.888231 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/924dec42-f6c3-4827-9944-b654fd9268ee-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.889117 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/924dec42-f6c3-4827-9944-b654fd9268ee-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.889791 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/924dec42-f6c3-4827-9944-b654fd9268ee-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.890266 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/924dec42-f6c3-4827-9944-b654fd9268ee-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.890968 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924dec42-f6c3-4827-9944-b654fd9268ee-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.894411 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/924dec42-f6c3-4827-9944-b654fd9268ee-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.911259 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvqsh\" (UniqueName: \"kubernetes.io/projected/924dec42-f6c3-4827-9944-b654fd9268ee-kube-api-access-vvqsh\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.913238 5065 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.913278 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d86a72a1-834a-4cc8-af6f-ef7cef473bb3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d86a72a1-834a-4cc8-af6f-ef7cef473bb3\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/29aa69e7afcd399540cd457b3868d3413ab56cedcdfeb4d7af74af3f853ab0f1/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:17 crc kubenswrapper[5065]: I1008 14:40:17.941134 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d86a72a1-834a-4cc8-af6f-ef7cef473bb3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d86a72a1-834a-4cc8-af6f-ef7cef473bb3\") pod \"openstack-cell1-galera-0\" (UID: \"924dec42-f6c3-4827-9944-b654fd9268ee\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.038199 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.039258 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.040951 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.041219 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-j7hmm"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.041300 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.055966 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.141251 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.190072 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3557dec-b0d7-43ef-b8e2-eab685138bc6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.190183 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6f5d\" (UniqueName: \"kubernetes.io/projected/c3557dec-b0d7-43ef-b8e2-eab685138bc6-kube-api-access-t6f5d\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.190260 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3557dec-b0d7-43ef-b8e2-eab685138bc6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.190287 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c3557dec-b0d7-43ef-b8e2-eab685138bc6-kolla-config\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.190309 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3557dec-b0d7-43ef-b8e2-eab685138bc6-config-data\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.292690 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6f5d\" (UniqueName: \"kubernetes.io/projected/c3557dec-b0d7-43ef-b8e2-eab685138bc6-kube-api-access-t6f5d\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.293052 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3557dec-b0d7-43ef-b8e2-eab685138bc6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.293083 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c3557dec-b0d7-43ef-b8e2-eab685138bc6-kolla-config\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.293110 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3557dec-b0d7-43ef-b8e2-eab685138bc6-config-data\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.293203 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3557dec-b0d7-43ef-b8e2-eab685138bc6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.294166 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c3557dec-b0d7-43ef-b8e2-eab685138bc6-kolla-config\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.294370 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3557dec-b0d7-43ef-b8e2-eab685138bc6-config-data\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.302852 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3557dec-b0d7-43ef-b8e2-eab685138bc6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.310909 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6f5d\" (UniqueName: \"kubernetes.io/projected/c3557dec-b0d7-43ef-b8e2-eab685138bc6-kube-api-access-t6f5d\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.312426 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3557dec-b0d7-43ef-b8e2-eab685138bc6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c3557dec-b0d7-43ef-b8e2-eab685138bc6\") " pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.358155 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.625092 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.639901 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.765045 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a55db36a-c50d-4e84-b228-0c3d0b7d5578","Type":"ContainerStarted","Data":"ac949f0902274dc55278642fdba73802702fda1cb85e68297901e3a593c1e0cc"}
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.770977 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92c8e8bf-929a-41a7-9184-d416a49abd5c","Type":"ContainerStarted","Data":"3d5d18b616acd9ce7f40aa92bb190b144c602c8d999eeaacd4eeac03c3872501"}
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.775539 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"924dec42-f6c3-4827-9944-b654fd9268ee","Type":"ContainerStarted","Data":"31ca5b8d5e7bdc093052ab43f95596e2cec492c56dbbbce19aa2cb8397b5ad3e"}
Oct 08 14:40:18 crc kubenswrapper[5065]: I1008 14:40:18.779253 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c3557dec-b0d7-43ef-b8e2-eab685138bc6","Type":"ContainerStarted","Data":"6fe2714ab7f3ce218ff17887bfb943b7a2d8879fdd020eb395a4d18dde192fcb"}
Oct 08 14:40:19 crc kubenswrapper[5065]: I1008 14:40:19.788834 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"924dec42-f6c3-4827-9944-b654fd9268ee","Type":"ContainerStarted","Data":"affde084a05cec8eaee24728cf929478c0767ed5c10ee73101082d67376c244d"}
Oct 08 14:40:19 crc kubenswrapper[5065]: I1008 14:40:19.790379 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c3557dec-b0d7-43ef-b8e2-eab685138bc6","Type":"ContainerStarted","Data":"3c425926083daf080b514d086c5287940ab05cbfebc593050dcf472d133c889f"}
Oct 08 14:40:19 crc kubenswrapper[5065]: I1008 14:40:19.850065 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.850042816 podStartE2EDuration="1.850042816s" podCreationTimestamp="2025-10-08 14:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:19.844133025 +0000 UTC m=+4921.621514782" watchObservedRunningTime="2025-10-08 14:40:19.850042816 +0000 UTC m=+4921.627424583"
Oct 08 14:40:20 crc kubenswrapper[5065]: I1008 14:40:20.797664 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Oct 08 14:40:21 crc kubenswrapper[5065]: I1008 14:40:21.595487 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kb7qb"
Oct 08 14:40:21 crc kubenswrapper[5065]: I1008 14:40:21.595542 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kb7qb"
Oct 08 14:40:21 crc kubenswrapper[5065]: I1008 14:40:21.679499 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kb7qb"
Oct 08 14:40:21 crc kubenswrapper[5065]: I1008 14:40:21.860951 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kb7qb"
Oct 08 14:40:21 crc kubenswrapper[5065]: I1008 14:40:21.922957 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kb7qb"]
Oct 08 14:40:23 crc kubenswrapper[5065]: I1008 14:40:23.359164 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Oct 08 14:40:23 crc kubenswrapper[5065]: I1008 14:40:23.822434 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kb7qb" podUID="1d471769-19ef-4399-8dd2-eb1d2f13faaa" containerName="registry-server" containerID="cri-o://a0847e476ae7e585da9b824d1f572d55d8e0ad58f772fa34076590f4a9ccbbaa" gracePeriod=2
Oct 08 14:40:23 crc kubenswrapper[5065]: I1008 14:40:23.951346 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b7964457-s5k6d"
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.278297 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kb7qb"
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.310607 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn"
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.362702 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b7964457-s5k6d"]
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.375240 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.375573 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.404753 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d471769-19ef-4399-8dd2-eb1d2f13faaa-utilities\") pod \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\" (UID: \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\") "
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.405115 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gd5q\" (UniqueName: \"kubernetes.io/projected/1d471769-19ef-4399-8dd2-eb1d2f13faaa-kube-api-access-7gd5q\") pod \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\" (UID: \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\") "
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.405353 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d471769-19ef-4399-8dd2-eb1d2f13faaa-catalog-content\") pod \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\" (UID: \"1d471769-19ef-4399-8dd2-eb1d2f13faaa\") "
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.406392 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d471769-19ef-4399-8dd2-eb1d2f13faaa-utilities" (OuterVolumeSpecName: "utilities") pod "1d471769-19ef-4399-8dd2-eb1d2f13faaa" (UID: "1d471769-19ef-4399-8dd2-eb1d2f13faaa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.407314 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d471769-19ef-4399-8dd2-eb1d2f13faaa-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.414679 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d471769-19ef-4399-8dd2-eb1d2f13faaa-kube-api-access-7gd5q" (OuterVolumeSpecName: "kube-api-access-7gd5q") pod "1d471769-19ef-4399-8dd2-eb1d2f13faaa" (UID: "1d471769-19ef-4399-8dd2-eb1d2f13faaa"). InnerVolumeSpecName "kube-api-access-7gd5q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.502242 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d471769-19ef-4399-8dd2-eb1d2f13faaa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d471769-19ef-4399-8dd2-eb1d2f13faaa" (UID: "1d471769-19ef-4399-8dd2-eb1d2f13faaa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.508936 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gd5q\" (UniqueName: \"kubernetes.io/projected/1d471769-19ef-4399-8dd2-eb1d2f13faaa-kube-api-access-7gd5q\") on node \"crc\" DevicePath \"\""
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.508973 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d471769-19ef-4399-8dd2-eb1d2f13faaa-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.842344 5065 generic.go:334] "Generic (PLEG): container finished" podID="92c8e8bf-929a-41a7-9184-d416a49abd5c" containerID="3d5d18b616acd9ce7f40aa92bb190b144c602c8d999eeaacd4eeac03c3872501" exitCode=0
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.842466 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92c8e8bf-929a-41a7-9184-d416a49abd5c","Type":"ContainerDied","Data":"3d5d18b616acd9ce7f40aa92bb190b144c602c8d999eeaacd4eeac03c3872501"}
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.846241 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kb7qb"
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.846269 5065 generic.go:334] "Generic (PLEG): container finished" podID="1d471769-19ef-4399-8dd2-eb1d2f13faaa" containerID="a0847e476ae7e585da9b824d1f572d55d8e0ad58f772fa34076590f4a9ccbbaa" exitCode=0
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.846320 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb7qb" event={"ID":"1d471769-19ef-4399-8dd2-eb1d2f13faaa","Type":"ContainerDied","Data":"a0847e476ae7e585da9b824d1f572d55d8e0ad58f772fa34076590f4a9ccbbaa"}
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.846558 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb7qb" event={"ID":"1d471769-19ef-4399-8dd2-eb1d2f13faaa","Type":"ContainerDied","Data":"d745f09eb60407bacdf285f909daae28ff3394cbc2f5e9d3e90d83e8e0d5f8cd"}
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.846643 5065 scope.go:117] "RemoveContainer" containerID="a0847e476ae7e585da9b824d1f572d55d8e0ad58f772fa34076590f4a9ccbbaa"
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.851933 5065 generic.go:334] "Generic (PLEG): container finished" podID="924dec42-f6c3-4827-9944-b654fd9268ee" containerID="affde084a05cec8eaee24728cf929478c0767ed5c10ee73101082d67376c244d" exitCode=0
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.852036 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"924dec42-f6c3-4827-9944-b654fd9268ee","Type":"ContainerDied","Data":"affde084a05cec8eaee24728cf929478c0767ed5c10ee73101082d67376c244d"}
Oct 08 14:40:24 crc kubenswrapper[5065]: I1008 14:40:24.852297 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b7964457-s5k6d" podUID="42105b65-e62a-47e6-9290-132f277aa57f" containerName="dnsmasq-dns" containerID="cri-o://6612c1a35d017589d7162a9c25778345caf21726d0d4f5635e41d0f2d744d14a" gracePeriod=10
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.016365 5065 scope.go:117] "RemoveContainer" containerID="080fb1976d07e2e66edf16bdec81137f2de692cf4561079dbbfa40d4ff749c5c"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.063063 5065 scope.go:117] "RemoveContainer" containerID="c4190ad808ecaa862b8c8b713ffd42b2524b6b184bfc0d41e9e142cb9f2f0f5f"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.099987 5065 scope.go:117] "RemoveContainer" containerID="a0847e476ae7e585da9b824d1f572d55d8e0ad58f772fa34076590f4a9ccbbaa"
Oct 08 14:40:25 crc kubenswrapper[5065]: E1008 14:40:25.101222 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0847e476ae7e585da9b824d1f572d55d8e0ad58f772fa34076590f4a9ccbbaa\": container with ID starting with a0847e476ae7e585da9b824d1f572d55d8e0ad58f772fa34076590f4a9ccbbaa not found: ID does not exist" containerID="a0847e476ae7e585da9b824d1f572d55d8e0ad58f772fa34076590f4a9ccbbaa"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.101256 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0847e476ae7e585da9b824d1f572d55d8e0ad58f772fa34076590f4a9ccbbaa"} err="failed to get container status \"a0847e476ae7e585da9b824d1f572d55d8e0ad58f772fa34076590f4a9ccbbaa\": rpc error: code = NotFound desc = could not find container \"a0847e476ae7e585da9b824d1f572d55d8e0ad58f772fa34076590f4a9ccbbaa\": container with ID starting with a0847e476ae7e585da9b824d1f572d55d8e0ad58f772fa34076590f4a9ccbbaa not found: ID does not exist"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.101305 5065 scope.go:117] "RemoveContainer" containerID="080fb1976d07e2e66edf16bdec81137f2de692cf4561079dbbfa40d4ff749c5c"
Oct 08 14:40:25 crc kubenswrapper[5065]: E1008 14:40:25.101717 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"080fb1976d07e2e66edf16bdec81137f2de692cf4561079dbbfa40d4ff749c5c\": container with ID starting with 080fb1976d07e2e66edf16bdec81137f2de692cf4561079dbbfa40d4ff749c5c not found: ID does not exist" containerID="080fb1976d07e2e66edf16bdec81137f2de692cf4561079dbbfa40d4ff749c5c"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.101771 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"080fb1976d07e2e66edf16bdec81137f2de692cf4561079dbbfa40d4ff749c5c"} err="failed to get container status \"080fb1976d07e2e66edf16bdec81137f2de692cf4561079dbbfa40d4ff749c5c\": rpc error: code = NotFound desc = could not find container \"080fb1976d07e2e66edf16bdec81137f2de692cf4561079dbbfa40d4ff749c5c\": container with ID starting with 080fb1976d07e2e66edf16bdec81137f2de692cf4561079dbbfa40d4ff749c5c not found: ID does not exist"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.101805 5065 scope.go:117] "RemoveContainer" containerID="c4190ad808ecaa862b8c8b713ffd42b2524b6b184bfc0d41e9e142cb9f2f0f5f"
Oct 08 14:40:25 crc kubenswrapper[5065]: E1008 14:40:25.102260 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4190ad808ecaa862b8c8b713ffd42b2524b6b184bfc0d41e9e142cb9f2f0f5f\": container with ID starting with c4190ad808ecaa862b8c8b713ffd42b2524b6b184bfc0d41e9e142cb9f2f0f5f not found: ID does not exist" containerID="c4190ad808ecaa862b8c8b713ffd42b2524b6b184bfc0d41e9e142cb9f2f0f5f"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.102309 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4190ad808ecaa862b8c8b713ffd42b2524b6b184bfc0d41e9e142cb9f2f0f5f"} err="failed to get container status \"c4190ad808ecaa862b8c8b713ffd42b2524b6b184bfc0d41e9e142cb9f2f0f5f\": rpc error: code = NotFound desc = could not find container \"c4190ad808ecaa862b8c8b713ffd42b2524b6b184bfc0d41e9e142cb9f2f0f5f\": container with ID starting with c4190ad808ecaa862b8c8b713ffd42b2524b6b184bfc0d41e9e142cb9f2f0f5f not found: ID does not exist"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.297103 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b7964457-s5k6d"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.422174 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqqqq\" (UniqueName: \"kubernetes.io/projected/42105b65-e62a-47e6-9290-132f277aa57f-kube-api-access-mqqqq\") pod \"42105b65-e62a-47e6-9290-132f277aa57f\" (UID: \"42105b65-e62a-47e6-9290-132f277aa57f\") "
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.422323 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42105b65-e62a-47e6-9290-132f277aa57f-dns-svc\") pod \"42105b65-e62a-47e6-9290-132f277aa57f\" (UID: \"42105b65-e62a-47e6-9290-132f277aa57f\") "
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.422470 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42105b65-e62a-47e6-9290-132f277aa57f-config\") pod \"42105b65-e62a-47e6-9290-132f277aa57f\" (UID: \"42105b65-e62a-47e6-9290-132f277aa57f\") "
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.428266 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42105b65-e62a-47e6-9290-132f277aa57f-kube-api-access-mqqqq" (OuterVolumeSpecName: "kube-api-access-mqqqq") pod "42105b65-e62a-47e6-9290-132f277aa57f" (UID: "42105b65-e62a-47e6-9290-132f277aa57f"). InnerVolumeSpecName "kube-api-access-mqqqq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.458351 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42105b65-e62a-47e6-9290-132f277aa57f-config" (OuterVolumeSpecName: "config") pod "42105b65-e62a-47e6-9290-132f277aa57f" (UID: "42105b65-e62a-47e6-9290-132f277aa57f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.459248 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42105b65-e62a-47e6-9290-132f277aa57f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42105b65-e62a-47e6-9290-132f277aa57f" (UID: "42105b65-e62a-47e6-9290-132f277aa57f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.523605 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqqqq\" (UniqueName: \"kubernetes.io/projected/42105b65-e62a-47e6-9290-132f277aa57f-kube-api-access-mqqqq\") on node \"crc\" DevicePath \"\""
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.523640 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42105b65-e62a-47e6-9290-132f277aa57f-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.523656 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42105b65-e62a-47e6-9290-132f277aa57f-config\") on node \"crc\" DevicePath \"\""
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.866019 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"92c8e8bf-929a-41a7-9184-d416a49abd5c","Type":"ContainerStarted","Data":"20d50a4b5b5e3d8f5b51670925e24305ff55642618ba4f053643c2b91f9cc3fa"}
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.869490 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"924dec42-f6c3-4827-9944-b654fd9268ee","Type":"ContainerStarted","Data":"20929c029fefd086c9cb08e59cc1950b799d5aa631a6dd0621069828af833d3c"}
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.875599 5065 generic.go:334] "Generic (PLEG): container finished" podID="42105b65-e62a-47e6-9290-132f277aa57f" containerID="6612c1a35d017589d7162a9c25778345caf21726d0d4f5635e41d0f2d744d14a" exitCode=0
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.875653 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b7964457-s5k6d" event={"ID":"42105b65-e62a-47e6-9290-132f277aa57f","Type":"ContainerDied","Data":"6612c1a35d017589d7162a9c25778345caf21726d0d4f5635e41d0f2d744d14a"}
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.875684 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b7964457-s5k6d" event={"ID":"42105b65-e62a-47e6-9290-132f277aa57f","Type":"ContainerDied","Data":"cbc3d27b37aa613da31c6ea5272e265bd8250862914f3d02c28fdabf0fcdf9da"}
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.875744 5065 scope.go:117] "RemoveContainer" containerID="6612c1a35d017589d7162a9c25778345caf21726d0d4f5635e41d0f2d744d14a"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.875729 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b7964457-s5k6d"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.901282 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.901247682 podStartE2EDuration="10.901247682s" podCreationTimestamp="2025-10-08 14:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:25.897356576 +0000 UTC m=+4927.674738373" watchObservedRunningTime="2025-10-08 14:40:25.901247682 +0000 UTC m=+4927.678629519"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.912264 5065 scope.go:117] "RemoveContainer" containerID="ed0ec689bdd05f668850b64b7205558b0eefbbe8aa4cc6d934c8c89ed48dfc56"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.932220 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.932190795 podStartE2EDuration="9.932190795s" podCreationTimestamp="2025-10-08 14:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:25.929755528 +0000 UTC m=+4927.707137375" watchObservedRunningTime="2025-10-08 14:40:25.932190795 +0000 UTC m=+4927.709572582"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.958714 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b7964457-s5k6d"]
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.964433 5065 scope.go:117] "RemoveContainer" containerID="6612c1a35d017589d7162a9c25778345caf21726d0d4f5635e41d0f2d744d14a"
Oct 08 14:40:25 crc kubenswrapper[5065]: E1008 14:40:25.964998 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6612c1a35d017589d7162a9c25778345caf21726d0d4f5635e41d0f2d744d14a\": container with ID starting with 6612c1a35d017589d7162a9c25778345caf21726d0d4f5635e41d0f2d744d14a not found: ID does not exist" containerID="6612c1a35d017589d7162a9c25778345caf21726d0d4f5635e41d0f2d744d14a"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.965056 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6612c1a35d017589d7162a9c25778345caf21726d0d4f5635e41d0f2d744d14a"} err="failed to get container status \"6612c1a35d017589d7162a9c25778345caf21726d0d4f5635e41d0f2d744d14a\": rpc error: code = NotFound desc = could not find container \"6612c1a35d017589d7162a9c25778345caf21726d0d4f5635e41d0f2d744d14a\": container with ID starting with 6612c1a35d017589d7162a9c25778345caf21726d0d4f5635e41d0f2d744d14a not found: ID does not exist"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.965082 5065 scope.go:117] "RemoveContainer" containerID="ed0ec689bdd05f668850b64b7205558b0eefbbe8aa4cc6d934c8c89ed48dfc56"
Oct 08 14:40:25 crc kubenswrapper[5065]: E1008 14:40:25.965448 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed0ec689bdd05f668850b64b7205558b0eefbbe8aa4cc6d934c8c89ed48dfc56\": container with ID starting with ed0ec689bdd05f668850b64b7205558b0eefbbe8aa4cc6d934c8c89ed48dfc56 not found: ID does not exist" containerID="ed0ec689bdd05f668850b64b7205558b0eefbbe8aa4cc6d934c8c89ed48dfc56"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.965474 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed0ec689bdd05f668850b64b7205558b0eefbbe8aa4cc6d934c8c89ed48dfc56"} err="failed to get container status \"ed0ec689bdd05f668850b64b7205558b0eefbbe8aa4cc6d934c8c89ed48dfc56\": rpc error: code = NotFound desc = could not find container \"ed0ec689bdd05f668850b64b7205558b0eefbbe8aa4cc6d934c8c89ed48dfc56\": container with ID starting with ed0ec689bdd05f668850b64b7205558b0eefbbe8aa4cc6d934c8c89ed48dfc56 not found: ID does not exist"
Oct 08 14:40:25 crc kubenswrapper[5065]: I1008 14:40:25.965567 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b7964457-s5k6d"]
Oct 08 14:40:26 crc kubenswrapper[5065]: I1008 14:40:26.907106 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42105b65-e62a-47e6-9290-132f277aa57f" path="/var/lib/kubelet/pods/42105b65-e62a-47e6-9290-132f277aa57f/volumes"
Oct 08 14:40:26 crc kubenswrapper[5065]: I1008 14:40:26.908118 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Oct 08 14:40:26 crc kubenswrapper[5065]: I1008 14:40:26.908174 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Oct 08 14:40:28 crc kubenswrapper[5065]: I1008 14:40:28.141883 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:28 crc kubenswrapper[5065]: I1008 14:40:28.141940 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:30 crc kubenswrapper[5065]: I1008 14:40:30.971257 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Oct 08 14:40:31 crc kubenswrapper[5065]: I1008 14:40:31.051790 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Oct 08 14:40:32 crc kubenswrapper[5065]: I1008 14:40:32.226490 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:32 crc kubenswrapper[5065]: I1008 14:40:32.302362 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:40:50 crc kubenswrapper[5065]: I1008 14:40:50.101976 5065 generic.go:334] "Generic (PLEG): container finished" podID="a55db36a-c50d-4e84-b228-0c3d0b7d5578" containerID="ac949f0902274dc55278642fdba73802702fda1cb85e68297901e3a593c1e0cc" exitCode=0
Oct 08 14:40:50 crc kubenswrapper[5065]: I1008 14:40:50.102092 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a55db36a-c50d-4e84-b228-0c3d0b7d5578","Type":"ContainerDied","Data":"ac949f0902274dc55278642fdba73802702fda1cb85e68297901e3a593c1e0cc"}
Oct 08 14:40:50 crc kubenswrapper[5065]: I1008 14:40:50.105758 5065 generic.go:334] "Generic (PLEG): container finished" podID="85ce3937-b175-4380-8a3d-f24a319e67e0" containerID="6fe64fd4856ff0b095b8af90a9f6a01efdb3f18d8f4a973f861380eeda93597d" exitCode=0
Oct 08 14:40:50 crc kubenswrapper[5065]: I1008 14:40:50.105798 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85ce3937-b175-4380-8a3d-f24a319e67e0","Type":"ContainerDied","Data":"6fe64fd4856ff0b095b8af90a9f6a01efdb3f18d8f4a973f861380eeda93597d"}
Oct 08 14:40:51 crc kubenswrapper[5065]: I1008 14:40:51.114594 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0"
event={"ID":"a55db36a-c50d-4e84-b228-0c3d0b7d5578","Type":"ContainerStarted","Data":"ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b"} Oct 08 14:40:51 crc kubenswrapper[5065]: I1008 14:40:51.115148 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 08 14:40:51 crc kubenswrapper[5065]: I1008 14:40:51.118754 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85ce3937-b175-4380-8a3d-f24a319e67e0","Type":"ContainerStarted","Data":"48b7ec502bafdc57e13a28a21497f33968873808e2bfa87d8a8ac27aa97b9ce2"} Oct 08 14:40:51 crc kubenswrapper[5065]: I1008 14:40:51.119062 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:51 crc kubenswrapper[5065]: I1008 14:40:51.161597 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.161577908 podStartE2EDuration="37.161577908s" podCreationTimestamp="2025-10-08 14:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:51.156984853 +0000 UTC m=+4952.934366630" watchObservedRunningTime="2025-10-08 14:40:51.161577908 +0000 UTC m=+4952.938959665" Oct 08 14:40:51 crc kubenswrapper[5065]: I1008 14:40:51.192357 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.192336056 podStartE2EDuration="38.192336056s" podCreationTimestamp="2025-10-08 14:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:51.185275544 +0000 UTC m=+4952.962657311" watchObservedRunningTime="2025-10-08 14:40:51.192336056 +0000 UTC m=+4952.969717823" Oct 08 14:40:54 crc kubenswrapper[5065]: I1008 14:40:54.375692 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:40:54 crc kubenswrapper[5065]: I1008 14:40:54.376126 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:40:54 crc kubenswrapper[5065]: I1008 14:40:54.376170 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 14:40:54 crc kubenswrapper[5065]: I1008 14:40:54.376744 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:40:54 crc kubenswrapper[5065]: I1008 14:40:54.376975 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" 
podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" gracePeriod=600 Oct 08 14:40:54 crc kubenswrapper[5065]: E1008 14:40:54.505547 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:40:55 crc kubenswrapper[5065]: I1008 14:40:55.001790 5065 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod1d471769-19ef-4399-8dd2-eb1d2f13faaa"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod1d471769-19ef-4399-8dd2-eb1d2f13faaa] : Timed out while waiting for systemd to remove kubepods-burstable-pod1d471769_19ef_4399_8dd2_eb1d2f13faaa.slice" Oct 08 14:40:55 crc kubenswrapper[5065]: E1008 14:40:55.001838 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod1d471769-19ef-4399-8dd2-eb1d2f13faaa] : unable to destroy cgroup paths for cgroup [kubepods burstable pod1d471769-19ef-4399-8dd2-eb1d2f13faaa] : Timed out while waiting for systemd to remove kubepods-burstable-pod1d471769_19ef_4399_8dd2_eb1d2f13faaa.slice" pod="openshift-marketplace/redhat-operators-kb7qb" podUID="1d471769-19ef-4399-8dd2-eb1d2f13faaa" Oct 08 14:40:55 crc kubenswrapper[5065]: I1008 14:40:55.150584 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" exitCode=0 Oct 08 14:40:55 crc kubenswrapper[5065]: I1008 14:40:55.150673 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kb7qb" Oct 08 14:40:55 crc kubenswrapper[5065]: I1008 14:40:55.150669 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0"} Oct 08 14:40:55 crc kubenswrapper[5065]: I1008 14:40:55.150747 5065 scope.go:117] "RemoveContainer" containerID="3d3af3101a610eee407e4cc35cf95788dd5358092821493201c93dcb1ae095eb" Oct 08 14:40:55 crc kubenswrapper[5065]: I1008 14:40:55.151512 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:40:55 crc kubenswrapper[5065]: E1008 14:40:55.151839 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:40:55 crc kubenswrapper[5065]: I1008 14:40:55.223673 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kb7qb"] Oct 08 14:40:55 crc kubenswrapper[5065]: I1008 14:40:55.238710 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kb7qb"] Oct 08 14:40:56 crc kubenswrapper[5065]: I1008 14:40:56.883798 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d471769-19ef-4399-8dd2-eb1d2f13faaa" path="/var/lib/kubelet/pods/1d471769-19ef-4399-8dd2-eb1d2f13faaa/volumes" Oct 08 14:41:05 crc kubenswrapper[5065]: I1008 14:41:05.115684 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:05 crc kubenswrapper[5065]: I1008 14:41:05.559609 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 08 14:41:05 crc kubenswrapper[5065]: I1008 14:41:05.873955 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:41:05 crc kubenswrapper[5065]: E1008 14:41:05.874325 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.172326 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-6rk7g"] Oct 08 14:41:14 crc kubenswrapper[5065]: E1008 14:41:14.173317 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d471769-19ef-4399-8dd2-eb1d2f13faaa" containerName="extract-content" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.173334 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d471769-19ef-4399-8dd2-eb1d2f13faaa" containerName="extract-content" Oct 08 14:41:14 crc kubenswrapper[5065]: E1008 14:41:14.173351 5065 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1d471769-19ef-4399-8dd2-eb1d2f13faaa" containerName="extract-utilities" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.173362 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d471769-19ef-4399-8dd2-eb1d2f13faaa" containerName="extract-utilities" Oct 08 14:41:14 crc kubenswrapper[5065]: E1008 14:41:14.173386 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d471769-19ef-4399-8dd2-eb1d2f13faaa" containerName="registry-server" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.173395 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d471769-19ef-4399-8dd2-eb1d2f13faaa" containerName="registry-server" Oct 08 14:41:14 crc kubenswrapper[5065]: E1008 14:41:14.173410 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42105b65-e62a-47e6-9290-132f277aa57f" containerName="dnsmasq-dns" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.173434 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="42105b65-e62a-47e6-9290-132f277aa57f" containerName="dnsmasq-dns" Oct 08 14:41:14 crc kubenswrapper[5065]: E1008 14:41:14.173463 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42105b65-e62a-47e6-9290-132f277aa57f" containerName="init" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.173472 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="42105b65-e62a-47e6-9290-132f277aa57f" containerName="init" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.173656 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d471769-19ef-4399-8dd2-eb1d2f13faaa" containerName="registry-server" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.173687 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="42105b65-e62a-47e6-9290-132f277aa57f" containerName="dnsmasq-dns" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.174644 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.191590 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-6rk7g"] Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.287005 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9acc6a9-cad9-42ff-9832-b305696f1785-config\") pod \"dnsmasq-dns-5fdc957c47-6rk7g\" (UID: \"e9acc6a9-cad9-42ff-9832-b305696f1785\") " pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.287079 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9acc6a9-cad9-42ff-9832-b305696f1785-dns-svc\") pod \"dnsmasq-dns-5fdc957c47-6rk7g\" (UID: \"e9acc6a9-cad9-42ff-9832-b305696f1785\") " pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.287250 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrk5d\" (UniqueName: \"kubernetes.io/projected/e9acc6a9-cad9-42ff-9832-b305696f1785-kube-api-access-jrk5d\") pod \"dnsmasq-dns-5fdc957c47-6rk7g\" (UID: \"e9acc6a9-cad9-42ff-9832-b305696f1785\") " pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.389051 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9acc6a9-cad9-42ff-9832-b305696f1785-config\") pod \"dnsmasq-dns-5fdc957c47-6rk7g\" (UID: \"e9acc6a9-cad9-42ff-9832-b305696f1785\") " pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.389331 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9acc6a9-cad9-42ff-9832-b305696f1785-dns-svc\") pod \"dnsmasq-dns-5fdc957c47-6rk7g\" (UID: \"e9acc6a9-cad9-42ff-9832-b305696f1785\") " pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.389471 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrk5d\" (UniqueName: \"kubernetes.io/projected/e9acc6a9-cad9-42ff-9832-b305696f1785-kube-api-access-jrk5d\") pod \"dnsmasq-dns-5fdc957c47-6rk7g\" (UID: \"e9acc6a9-cad9-42ff-9832-b305696f1785\") " pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.390136 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9acc6a9-cad9-42ff-9832-b305696f1785-config\") pod \"dnsmasq-dns-5fdc957c47-6rk7g\" (UID: \"e9acc6a9-cad9-42ff-9832-b305696f1785\") " pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.391190 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9acc6a9-cad9-42ff-9832-b305696f1785-dns-svc\") pod \"dnsmasq-dns-5fdc957c47-6rk7g\" (UID: \"e9acc6a9-cad9-42ff-9832-b305696f1785\") " pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.418808 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrk5d\" (UniqueName: 
\"kubernetes.io/projected/e9acc6a9-cad9-42ff-9832-b305696f1785-kube-api-access-jrk5d\") pod \"dnsmasq-dns-5fdc957c47-6rk7g\" (UID: \"e9acc6a9-cad9-42ff-9832-b305696f1785\") " pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.507646 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.778259 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:41:14 crc kubenswrapper[5065]: I1008 14:41:14.948326 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-6rk7g"] Oct 08 14:41:15 crc kubenswrapper[5065]: I1008 14:41:15.369341 5065 generic.go:334] "Generic (PLEG): container finished" podID="e9acc6a9-cad9-42ff-9832-b305696f1785" containerID="0b0a9c1a58d1d6efc4f04ed72f4d2f34e3d67fda1c4c0efc995971117cdcd724" exitCode=0 Oct 08 14:41:15 crc kubenswrapper[5065]: I1008 14:41:15.369465 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" event={"ID":"e9acc6a9-cad9-42ff-9832-b305696f1785","Type":"ContainerDied","Data":"0b0a9c1a58d1d6efc4f04ed72f4d2f34e3d67fda1c4c0efc995971117cdcd724"} Oct 08 14:41:15 crc kubenswrapper[5065]: I1008 14:41:15.369680 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" event={"ID":"e9acc6a9-cad9-42ff-9832-b305696f1785","Type":"ContainerStarted","Data":"98c9b1e7238846790ea5661c3a7f0ad492a3c3a5a91887725c9d4b89bf602371"} Oct 08 14:41:15 crc kubenswrapper[5065]: I1008 14:41:15.560579 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:41:16 crc kubenswrapper[5065]: I1008 14:41:16.378922 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" event={"ID":"e9acc6a9-cad9-42ff-9832-b305696f1785","Type":"ContainerStarted","Data":"664f1f00a1e173b08f6cb1f497f2f2d1453888c5f29d853c471de5a0a5204401"} Oct 08 14:41:16 crc kubenswrapper[5065]: I1008 14:41:16.379325 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" Oct 08 14:41:16 crc kubenswrapper[5065]: I1008 14:41:16.398125 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" podStartSLOduration=2.398100849 podStartE2EDuration="2.398100849s" podCreationTimestamp="2025-10-08 14:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:41:16.395766126 +0000 UTC m=+4978.173147903" watchObservedRunningTime="2025-10-08 14:41:16.398100849 +0000 UTC m=+4978.175482606" Oct 08 14:41:16 crc kubenswrapper[5065]: I1008 14:41:16.874008 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:41:16 crc kubenswrapper[5065]: E1008 14:41:16.874270 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:41:18 crc kubenswrapper[5065]: I1008 
14:41:18.513663 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a55db36a-c50d-4e84-b228-0c3d0b7d5578" containerName="rabbitmq" containerID="cri-o://ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b" gracePeriod=604797 Oct 08 14:41:19 crc kubenswrapper[5065]: I1008 14:41:19.483118 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="85ce3937-b175-4380-8a3d-f24a319e67e0" containerName="rabbitmq" containerID="cri-o://48b7ec502bafdc57e13a28a21497f33968873808e2bfa87d8a8ac27aa97b9ce2" gracePeriod=604797 Oct 08 14:41:24 crc kubenswrapper[5065]: I1008 14:41:24.509788 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fdc957c47-6rk7g" Oct 08 14:41:24 crc kubenswrapper[5065]: I1008 14:41:24.582202 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-rrptn"] Oct 08 14:41:24 crc kubenswrapper[5065]: I1008 14:41:24.582564 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" podUID="aa3d6f14-7344-4e2c-a932-af286e9861b2" containerName="dnsmasq-dns" containerID="cri-o://cbbd50030514967eb2b6ebf1e27967d02ce2abe9b93b546ae175ce3328d29ba8" gracePeriod=10 Oct 08 14:41:24 crc kubenswrapper[5065]: E1008 14:41:24.785249 5065 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55db36a_c50d_4e84_b228_0c3d0b7d5578.slice/crio-ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b.scope\": RecentStats: unable to find data in memory cache]" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.113032 5065 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="85ce3937-b175-4380-8a3d-f24a319e67e0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.246:5671: connect: connection refused" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.125079 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.237191 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.269593 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-server-conf\") pod \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.269636 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3d6f14-7344-4e2c-a932-af286e9861b2-config\") pod \"aa3d6f14-7344-4e2c-a932-af286e9861b2\" (UID: \"aa3d6f14-7344-4e2c-a932-af286e9861b2\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.269668 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgz5d\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-kube-api-access-bgz5d\") pod \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.269694 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r4zv\" (UniqueName: \"kubernetes.io/projected/aa3d6f14-7344-4e2c-a932-af286e9861b2-kube-api-access-5r4zv\") pod \"aa3d6f14-7344-4e2c-a932-af286e9861b2\" (UID: \"aa3d6f14-7344-4e2c-a932-af286e9861b2\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.269719 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa3d6f14-7344-4e2c-a932-af286e9861b2-dns-svc\") pod \"aa3d6f14-7344-4e2c-a932-af286e9861b2\" (UID: \"aa3d6f14-7344-4e2c-a932-af286e9861b2\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.269737 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a55db36a-c50d-4e84-b228-0c3d0b7d5578-pod-info\") pod \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.269857 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\") pod \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.269881 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a55db36a-c50d-4e84-b228-0c3d0b7d5578-erlang-cookie-secret\") pod \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.269906 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-tls\") pod \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.269928 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-confd\") pod \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\" 
(UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.269957 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-plugins\") pod \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.269972 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-config-data\") pod \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.270028 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-erlang-cookie\") pod \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.270042 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-plugins-conf\") pod \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\" (UID: \"a55db36a-c50d-4e84-b228-0c3d0b7d5578\") " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.271978 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a55db36a-c50d-4e84-b228-0c3d0b7d5578" (UID: "a55db36a-c50d-4e84-b228-0c3d0b7d5578"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.273832 5065 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.281446 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a55db36a-c50d-4e84-b228-0c3d0b7d5578" (UID: "a55db36a-c50d-4e84-b228-0c3d0b7d5578"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.282766 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a55db36a-c50d-4e84-b228-0c3d0b7d5578" (UID: "a55db36a-c50d-4e84-b228-0c3d0b7d5578"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.285762 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a55db36a-c50d-4e84-b228-0c3d0b7d5578-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a55db36a-c50d-4e84-b228-0c3d0b7d5578" (UID: "a55db36a-c50d-4e84-b228-0c3d0b7d5578"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.297089 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3d6f14-7344-4e2c-a932-af286e9861b2-kube-api-access-5r4zv" (OuterVolumeSpecName: "kube-api-access-5r4zv") pod "aa3d6f14-7344-4e2c-a932-af286e9861b2" (UID: "aa3d6f14-7344-4e2c-a932-af286e9861b2"). InnerVolumeSpecName "kube-api-access-5r4zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.298483 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-kube-api-access-bgz5d" (OuterVolumeSpecName: "kube-api-access-bgz5d") pod "a55db36a-c50d-4e84-b228-0c3d0b7d5578" (UID: "a55db36a-c50d-4e84-b228-0c3d0b7d5578"). InnerVolumeSpecName "kube-api-access-bgz5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.307809 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e0b9e43-921e-4a63-8037-4105da53ad34" (OuterVolumeSpecName: "persistence") pod "a55db36a-c50d-4e84-b228-0c3d0b7d5578" (UID: "a55db36a-c50d-4e84-b228-0c3d0b7d5578"). InnerVolumeSpecName "pvc-0e0b9e43-921e-4a63-8037-4105da53ad34". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.308093 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a55db36a-c50d-4e84-b228-0c3d0b7d5578" (UID: "a55db36a-c50d-4e84-b228-0c3d0b7d5578"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.308189 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a55db36a-c50d-4e84-b228-0c3d0b7d5578-pod-info" (OuterVolumeSpecName: "pod-info") pod "a55db36a-c50d-4e84-b228-0c3d0b7d5578" (UID: "a55db36a-c50d-4e84-b228-0c3d0b7d5578"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.317204 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-config-data" (OuterVolumeSpecName: "config-data") pod "a55db36a-c50d-4e84-b228-0c3d0b7d5578" (UID: "a55db36a-c50d-4e84-b228-0c3d0b7d5578"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.333308 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-server-conf" (OuterVolumeSpecName: "server-conf") pod "a55db36a-c50d-4e84-b228-0c3d0b7d5578" (UID: "a55db36a-c50d-4e84-b228-0c3d0b7d5578"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.337449 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3d6f14-7344-4e2c-a932-af286e9861b2-config" (OuterVolumeSpecName: "config") pod "aa3d6f14-7344-4e2c-a932-af286e9861b2" (UID: "aa3d6f14-7344-4e2c-a932-af286e9861b2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.350432 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3d6f14-7344-4e2c-a932-af286e9861b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aa3d6f14-7344-4e2c-a932-af286e9861b2" (UID: "aa3d6f14-7344-4e2c-a932-af286e9861b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.372696 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a55db36a-c50d-4e84-b228-0c3d0b7d5578" (UID: "a55db36a-c50d-4e84-b228-0c3d0b7d5578"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.377397 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r4zv\" (UniqueName: \"kubernetes.io/projected/aa3d6f14-7344-4e2c-a932-af286e9861b2-kube-api-access-5r4zv\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.377454 5065 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa3d6f14-7344-4e2c-a932-af286e9861b2-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.377473 5065 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a55db36a-c50d-4e84-b228-0c3d0b7d5578-pod-info\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.377508 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\") on node \"crc\" " Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.377527 5065 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a55db36a-c50d-4e84-b228-0c3d0b7d5578-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.377539 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.377548 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.377557 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.377569 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.377579 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/a55db36a-c50d-4e84-b228-0c3d0b7d5578-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.377592 5065 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a55db36a-c50d-4e84-b228-0c3d0b7d5578-server-conf\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.377604 5065 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3d6f14-7344-4e2c-a932-af286e9861b2-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.377620 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgz5d\" (UniqueName: \"kubernetes.io/projected/a55db36a-c50d-4e84-b228-0c3d0b7d5578-kube-api-access-bgz5d\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.398340 5065 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.398597 5065 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0e0b9e43-921e-4a63-8037-4105da53ad34" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e0b9e43-921e-4a63-8037-4105da53ad34") on node "crc" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.457778 5065 generic.go:334] "Generic (PLEG): container finished" podID="aa3d6f14-7344-4e2c-a932-af286e9861b2" containerID="cbbd50030514967eb2b6ebf1e27967d02ce2abe9b93b546ae175ce3328d29ba8" exitCode=0 Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.457856 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" event={"ID":"aa3d6f14-7344-4e2c-a932-af286e9861b2","Type":"ContainerDied","Data":"cbbd50030514967eb2b6ebf1e27967d02ce2abe9b93b546ae175ce3328d29ba8"} Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.457868 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.457886 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-rrptn" event={"ID":"aa3d6f14-7344-4e2c-a932-af286e9861b2","Type":"ContainerDied","Data":"16a0dfe20a9dcf73b6ed3d0d406f3cd6b1000fb9c43f611d410d801639997ac4"} Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.457908 5065 scope.go:117] "RemoveContainer" containerID="cbbd50030514967eb2b6ebf1e27967d02ce2abe9b93b546ae175ce3328d29ba8" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.462076 5065 generic.go:334] "Generic (PLEG): container finished" podID="a55db36a-c50d-4e84-b228-0c3d0b7d5578" containerID="ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b" exitCode=0 Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.462106 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a55db36a-c50d-4e84-b228-0c3d0b7d5578","Type":"ContainerDied","Data":"ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b"} Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.462124 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.462127 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a55db36a-c50d-4e84-b228-0c3d0b7d5578","Type":"ContainerDied","Data":"94dda64939c187dea4f9eb256287d86175ad1b6355019468cfde69d3b4f5fe59"} Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.475737 5065 scope.go:117] "RemoveContainer" containerID="40b50bbdc0aa92b243c8dbd858d16e0450938f2306f7a28e4829b365040473b2" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.478576 5065 reconciler_common.go:293] "Volume detached for volume \"pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.490559 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-rrptn"] Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.496059 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-rrptn"] Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.504955 5065 scope.go:117] "RemoveContainer" containerID="cbbd50030514967eb2b6ebf1e27967d02ce2abe9b93b546ae175ce3328d29ba8" Oct 08 14:41:25 crc kubenswrapper[5065]: E1008 14:41:25.505574 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbbd50030514967eb2b6ebf1e27967d02ce2abe9b93b546ae175ce3328d29ba8\": container with ID starting with cbbd50030514967eb2b6ebf1e27967d02ce2abe9b93b546ae175ce3328d29ba8 not found: ID does not exist" containerID="cbbd50030514967eb2b6ebf1e27967d02ce2abe9b93b546ae175ce3328d29ba8" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.505617 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbbd50030514967eb2b6ebf1e27967d02ce2abe9b93b546ae175ce3328d29ba8"} err="failed to get container status \"cbbd50030514967eb2b6ebf1e27967d02ce2abe9b93b546ae175ce3328d29ba8\": rpc error: code = NotFound desc = could not find container \"cbbd50030514967eb2b6ebf1e27967d02ce2abe9b93b546ae175ce3328d29ba8\": container with ID starting with cbbd50030514967eb2b6ebf1e27967d02ce2abe9b93b546ae175ce3328d29ba8 not found: ID does not exist" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.505646 5065 scope.go:117] "RemoveContainer" containerID="40b50bbdc0aa92b243c8dbd858d16e0450938f2306f7a28e4829b365040473b2" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.505956 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:41:25 crc kubenswrapper[5065]: E1008 14:41:25.505989 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b50bbdc0aa92b243c8dbd858d16e0450938f2306f7a28e4829b365040473b2\": container with ID starting with 40b50bbdc0aa92b243c8dbd858d16e0450938f2306f7a28e4829b365040473b2 not found: ID does not exist" containerID="40b50bbdc0aa92b243c8dbd858d16e0450938f2306f7a28e4829b365040473b2" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.506008 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b50bbdc0aa92b243c8dbd858d16e0450938f2306f7a28e4829b365040473b2"} err="failed to get container status \"40b50bbdc0aa92b243c8dbd858d16e0450938f2306f7a28e4829b365040473b2\": rpc 
error: code = NotFound desc = could not find container \"40b50bbdc0aa92b243c8dbd858d16e0450938f2306f7a28e4829b365040473b2\": container with ID starting with 40b50bbdc0aa92b243c8dbd858d16e0450938f2306f7a28e4829b365040473b2 not found: ID does not exist" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.506021 5065 scope.go:117] "RemoveContainer" containerID="ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.510717 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.528542 5065 scope.go:117] "RemoveContainer" containerID="ac949f0902274dc55278642fdba73802702fda1cb85e68297901e3a593c1e0cc" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.530839 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:41:25 crc kubenswrapper[5065]: E1008 14:41:25.531113 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55db36a-c50d-4e84-b228-0c3d0b7d5578" containerName="setup-container" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.531128 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55db36a-c50d-4e84-b228-0c3d0b7d5578" containerName="setup-container" Oct 08 14:41:25 crc kubenswrapper[5065]: E1008 14:41:25.531142 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3d6f14-7344-4e2c-a932-af286e9861b2" containerName="dnsmasq-dns" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.531148 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3d6f14-7344-4e2c-a932-af286e9861b2" containerName="dnsmasq-dns" Oct 08 14:41:25 crc kubenswrapper[5065]: E1008 14:41:25.531172 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3d6f14-7344-4e2c-a932-af286e9861b2" containerName="init" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.531178 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3d6f14-7344-4e2c-a932-af286e9861b2" containerName="init" Oct 08 14:41:25 crc kubenswrapper[5065]: E1008 14:41:25.531190 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55db36a-c50d-4e84-b228-0c3d0b7d5578" containerName="rabbitmq" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.531195 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55db36a-c50d-4e84-b228-0c3d0b7d5578" containerName="rabbitmq" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.531314 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a55db36a-c50d-4e84-b228-0c3d0b7d5578" containerName="rabbitmq" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.531346 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3d6f14-7344-4e2c-a932-af286e9861b2" containerName="dnsmasq-dns" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.532082 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.534321 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.537420 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.538370 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.538560 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.540041 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.540210 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k2fgx" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.540368 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.550801 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.569640 5065 scope.go:117] "RemoveContainer" containerID="ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b" Oct 08 14:41:25 crc kubenswrapper[5065]: E1008 14:41:25.570303 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b\": container with ID starting with ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b not found: ID does not exist" containerID="ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.570354 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b"} err="failed to get container status \"ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b\": rpc error: code = NotFound desc = could not find container \"ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b\": container with ID starting with ba84682225559cfa4701d6a7a3fc5db860d080b569f9b7b0690e8fc695f4f87b not found: ID does not exist" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.570387 5065 scope.go:117] "RemoveContainer" containerID="ac949f0902274dc55278642fdba73802702fda1cb85e68297901e3a593c1e0cc" Oct 08 14:41:25 crc kubenswrapper[5065]: E1008 14:41:25.571246 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac949f0902274dc55278642fdba73802702fda1cb85e68297901e3a593c1e0cc\": container with ID starting with ac949f0902274dc55278642fdba73802702fda1cb85e68297901e3a593c1e0cc not found: ID does not exist" containerID="ac949f0902274dc55278642fdba73802702fda1cb85e68297901e3a593c1e0cc" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.571277 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac949f0902274dc55278642fdba73802702fda1cb85e68297901e3a593c1e0cc"} err="failed to get container status 
\"ac949f0902274dc55278642fdba73802702fda1cb85e68297901e3a593c1e0cc\": rpc error: code = NotFound desc = could not find container \"ac949f0902274dc55278642fdba73802702fda1cb85e68297901e3a593c1e0cc\": container with ID starting with ac949f0902274dc55278642fdba73802702fda1cb85e68297901e3a593c1e0cc not found: ID does not exist" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.680599 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/abbae3c9-79f9-4195-8988-eb1137bfa8ee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.680652 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abbae3c9-79f9-4195-8988-eb1137bfa8ee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.680681 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abbae3c9-79f9-4195-8988-eb1137bfa8ee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.680722 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/abbae3c9-79f9-4195-8988-eb1137bfa8ee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.680870 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abbae3c9-79f9-4195-8988-eb1137bfa8ee-config-data\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.681036 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abbae3c9-79f9-4195-8988-eb1137bfa8ee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.681074 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abbae3c9-79f9-4195-8988-eb1137bfa8ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.681109 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abbae3c9-79f9-4195-8988-eb1137bfa8ee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.681134 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.681224 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/abbae3c9-79f9-4195-8988-eb1137bfa8ee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.681294 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77lpf\" (UniqueName: \"kubernetes.io/projected/abbae3c9-79f9-4195-8988-eb1137bfa8ee-kube-api-access-77lpf\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.784785 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abbae3c9-79f9-4195-8988-eb1137bfa8ee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.785050 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abbae3c9-79f9-4195-8988-eb1137bfa8ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.785111 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abbae3c9-79f9-4195-8988-eb1137bfa8ee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.785147 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.785224 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/abbae3c9-79f9-4195-8988-eb1137bfa8ee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.785308 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77lpf\" (UniqueName: \"kubernetes.io/projected/abbae3c9-79f9-4195-8988-eb1137bfa8ee-kube-api-access-77lpf\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.785389 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/abbae3c9-79f9-4195-8988-eb1137bfa8ee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.785417 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abbae3c9-79f9-4195-8988-eb1137bfa8ee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.785932 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abbae3c9-79f9-4195-8988-eb1137bfa8ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.786296 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/abbae3c9-79f9-4195-8988-eb1137bfa8ee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.786361 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abbae3c9-79f9-4195-8988-eb1137bfa8ee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.786697 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abbae3c9-79f9-4195-8988-eb1137bfa8ee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.786789 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/abbae3c9-79f9-4195-8988-eb1137bfa8ee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.786835 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abbae3c9-79f9-4195-8988-eb1137bfa8ee-config-data\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.788855 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abbae3c9-79f9-4195-8988-eb1137bfa8ee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.789097 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abbae3c9-79f9-4195-8988-eb1137bfa8ee-config-data\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.789304 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/abbae3c9-79f9-4195-8988-eb1137bfa8ee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.790336 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/abbae3c9-79f9-4195-8988-eb1137bfa8ee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.791125 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abbae3c9-79f9-4195-8988-eb1137bfa8ee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.791156 5065 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.791195 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e3e5e10768210a27f1f3a9f9d62e6c95645c1992a8b16917bab76178a8b75825/globalmount\"" pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.794963 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abbae3c9-79f9-4195-8988-eb1137bfa8ee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.812182 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77lpf\" (UniqueName: \"kubernetes.io/projected/abbae3c9-79f9-4195-8988-eb1137bfa8ee-kube-api-access-77lpf\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.840946 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e0b9e43-921e-4a63-8037-4105da53ad34\") pod \"rabbitmq-server-0\" (UID: \"abbae3c9-79f9-4195-8988-eb1137bfa8ee\") " pod="openstack/rabbitmq-server-0" Oct 08 14:41:25 crc kubenswrapper[5065]: I1008 14:41:25.871950 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.011968 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.124009 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.190923 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-plugins-conf\") pod \"85ce3937-b175-4380-8a3d-f24a319e67e0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.191327 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-confd\") pod \"85ce3937-b175-4380-8a3d-f24a319e67e0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.191369 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7sht\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-kube-api-access-b7sht\") pod \"85ce3937-b175-4380-8a3d-f24a319e67e0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.191391 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85ce3937-b175-4380-8a3d-f24a319e67e0-pod-info\") pod \"85ce3937-b175-4380-8a3d-f24a319e67e0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.191450 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-tls\") pod \"85ce3937-b175-4380-8a3d-f24a319e67e0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.191472 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-server-conf\") pod \"85ce3937-b175-4380-8a3d-f24a319e67e0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.191489 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85ce3937-b175-4380-8a3d-f24a319e67e0-erlang-cookie-secret\") pod \"85ce3937-b175-4380-8a3d-f24a319e67e0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.191520 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-plugins\") pod \"85ce3937-b175-4380-8a3d-f24a319e67e0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.191583 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-config-data\") pod \"85ce3937-b175-4380-8a3d-f24a319e67e0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.191637 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-erlang-cookie\") pod \"85ce3937-b175-4380-8a3d-f24a319e67e0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.191767 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49e8eaf2-6217-418c-ad6c-294176d90954\") pod \"85ce3937-b175-4380-8a3d-f24a319e67e0\" (UID: \"85ce3937-b175-4380-8a3d-f24a319e67e0\") " Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.192335 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "85ce3937-b175-4380-8a3d-f24a319e67e0" (UID: "85ce3937-b175-4380-8a3d-f24a319e67e0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.192647 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "85ce3937-b175-4380-8a3d-f24a319e67e0" (UID: "85ce3937-b175-4380-8a3d-f24a319e67e0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.193601 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "85ce3937-b175-4380-8a3d-f24a319e67e0" (UID: "85ce3937-b175-4380-8a3d-f24a319e67e0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.196368 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-kube-api-access-b7sht" (OuterVolumeSpecName: "kube-api-access-b7sht") pod "85ce3937-b175-4380-8a3d-f24a319e67e0" (UID: "85ce3937-b175-4380-8a3d-f24a319e67e0"). InnerVolumeSpecName "kube-api-access-b7sht". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.197244 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "85ce3937-b175-4380-8a3d-f24a319e67e0" (UID: "85ce3937-b175-4380-8a3d-f24a319e67e0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.197240 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ce3937-b175-4380-8a3d-f24a319e67e0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "85ce3937-b175-4380-8a3d-f24a319e67e0" (UID: "85ce3937-b175-4380-8a3d-f24a319e67e0"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.200325 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/85ce3937-b175-4380-8a3d-f24a319e67e0-pod-info" (OuterVolumeSpecName: "pod-info") pod "85ce3937-b175-4380-8a3d-f24a319e67e0" (UID: "85ce3937-b175-4380-8a3d-f24a319e67e0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.217650 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49e8eaf2-6217-418c-ad6c-294176d90954" (OuterVolumeSpecName: "persistence") pod "85ce3937-b175-4380-8a3d-f24a319e67e0" (UID: "85ce3937-b175-4380-8a3d-f24a319e67e0"). InnerVolumeSpecName "pvc-49e8eaf2-6217-418c-ad6c-294176d90954". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.232250 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-config-data" (OuterVolumeSpecName: "config-data") pod "85ce3937-b175-4380-8a3d-f24a319e67e0" (UID: "85ce3937-b175-4380-8a3d-f24a319e67e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.247699 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-server-conf" (OuterVolumeSpecName: "server-conf") pod "85ce3937-b175-4380-8a3d-f24a319e67e0" (UID: "85ce3937-b175-4380-8a3d-f24a319e67e0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.282796 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "85ce3937-b175-4380-8a3d-f24a319e67e0" (UID: "85ce3937-b175-4380-8a3d-f24a319e67e0"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.293342 5065 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-49e8eaf2-6217-418c-ad6c-294176d90954\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49e8eaf2-6217-418c-ad6c-294176d90954\") on node \"crc\" " Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.293376 5065 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.293387 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.293396 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7sht\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-kube-api-access-b7sht\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.293409 5065 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85ce3937-b175-4380-8a3d-f24a319e67e0-pod-info\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.293420 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.293450 5065 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-server-conf\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.293463 5065 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85ce3937-b175-4380-8a3d-f24a319e67e0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.293473 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.293483 5065 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85ce3937-b175-4380-8a3d-f24a319e67e0-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.293492 5065 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85ce3937-b175-4380-8a3d-f24a319e67e0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.309260 5065 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.309401 5065 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-49e8eaf2-6217-418c-ad6c-294176d90954" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49e8eaf2-6217-418c-ad6c-294176d90954") on node "crc" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.394573 5065 reconciler_common.go:293] "Volume detached for volume \"pvc-49e8eaf2-6217-418c-ad6c-294176d90954\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49e8eaf2-6217-418c-ad6c-294176d90954\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.471085 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"abbae3c9-79f9-4195-8988-eb1137bfa8ee","Type":"ContainerStarted","Data":"c27cadbeed9560290663a209d04db7df75e387a43e2f77540780c0fc53d39ca5"} Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.474897 5065 generic.go:334] "Generic (PLEG): container finished" podID="85ce3937-b175-4380-8a3d-f24a319e67e0" containerID="48b7ec502bafdc57e13a28a21497f33968873808e2bfa87d8a8ac27aa97b9ce2" exitCode=0 Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.474958 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.474990 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85ce3937-b175-4380-8a3d-f24a319e67e0","Type":"ContainerDied","Data":"48b7ec502bafdc57e13a28a21497f33968873808e2bfa87d8a8ac27aa97b9ce2"} Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.475026 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85ce3937-b175-4380-8a3d-f24a319e67e0","Type":"ContainerDied","Data":"28d125bf59c6cf2ed4999fa7f15c74077dc344788dc01bf248b1c6414e437385"} Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.475044 5065 scope.go:117] "RemoveContainer" containerID="48b7ec502bafdc57e13a28a21497f33968873808e2bfa87d8a8ac27aa97b9ce2" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.510694 5065 scope.go:117] "RemoveContainer" containerID="6fe64fd4856ff0b095b8af90a9f6a01efdb3f18d8f4a973f861380eeda93597d" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.518964 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.523981 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.546380 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:41:26 crc kubenswrapper[5065]: E1008 14:41:26.546759 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ce3937-b175-4380-8a3d-f24a319e67e0" containerName="rabbitmq" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.546775 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ce3937-b175-4380-8a3d-f24a319e67e0" containerName="rabbitmq" Oct 08 14:41:26 crc kubenswrapper[5065]: E1008 14:41:26.546808 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ce3937-b175-4380-8a3d-f24a319e67e0" containerName="setup-container" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.546815 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ce3937-b175-4380-8a3d-f24a319e67e0" 
containerName="setup-container" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.546979 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ce3937-b175-4380-8a3d-f24a319e67e0" containerName="rabbitmq" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.548509 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.550208 5065 scope.go:117] "RemoveContainer" containerID="48b7ec502bafdc57e13a28a21497f33968873808e2bfa87d8a8ac27aa97b9ce2" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.553167 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4npwx" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.553472 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 08 14:41:26 crc kubenswrapper[5065]: E1008 14:41:26.553492 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b7ec502bafdc57e13a28a21497f33968873808e2bfa87d8a8ac27aa97b9ce2\": container with ID starting with 48b7ec502bafdc57e13a28a21497f33968873808e2bfa87d8a8ac27aa97b9ce2 not found: ID does not exist" containerID="48b7ec502bafdc57e13a28a21497f33968873808e2bfa87d8a8ac27aa97b9ce2" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.553537 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b7ec502bafdc57e13a28a21497f33968873808e2bfa87d8a8ac27aa97b9ce2"} err="failed to get container status \"48b7ec502bafdc57e13a28a21497f33968873808e2bfa87d8a8ac27aa97b9ce2\": rpc error: code = NotFound desc = could not find container \"48b7ec502bafdc57e13a28a21497f33968873808e2bfa87d8a8ac27aa97b9ce2\": container with ID starting with 48b7ec502bafdc57e13a28a21497f33968873808e2bfa87d8a8ac27aa97b9ce2 not found: ID does not exist" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.553572 5065 scope.go:117] "RemoveContainer" containerID="6fe64fd4856ff0b095b8af90a9f6a01efdb3f18d8f4a973f861380eeda93597d" Oct 08 14:41:26 crc kubenswrapper[5065]: E1008 14:41:26.554020 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fe64fd4856ff0b095b8af90a9f6a01efdb3f18d8f4a973f861380eeda93597d\": container with ID starting with 6fe64fd4856ff0b095b8af90a9f6a01efdb3f18d8f4a973f861380eeda93597d not found: ID does not exist" containerID="6fe64fd4856ff0b095b8af90a9f6a01efdb3f18d8f4a973f861380eeda93597d" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.554049 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe64fd4856ff0b095b8af90a9f6a01efdb3f18d8f4a973f861380eeda93597d"} err="failed to get container status \"6fe64fd4856ff0b095b8af90a9f6a01efdb3f18d8f4a973f861380eeda93597d\": rpc error: code = NotFound desc = could not find container \"6fe64fd4856ff0b095b8af90a9f6a01efdb3f18d8f4a973f861380eeda93597d\": container with ID starting with 6fe64fd4856ff0b095b8af90a9f6a01efdb3f18d8f4a973f861380eeda93597d not found: ID does not exist" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.554956 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.555183 5065 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-cell1-default-user" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.555295 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.555451 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.555653 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.581459 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.700023 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/58778956-908f-40b7-921d-920c247222e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.700083 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/58778956-908f-40b7-921d-920c247222e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.700118 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znqsq\" (UniqueName: \"kubernetes.io/projected/58778956-908f-40b7-921d-920c247222e7-kube-api-access-znqsq\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.700151 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/58778956-908f-40b7-921d-920c247222e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.700174 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/58778956-908f-40b7-921d-920c247222e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.700355 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58778956-908f-40b7-921d-920c247222e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.700385 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49e8eaf2-6217-418c-ad6c-294176d90954\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49e8eaf2-6217-418c-ad6c-294176d90954\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.700413 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/58778956-908f-40b7-921d-920c247222e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.700512 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/58778956-908f-40b7-921d-920c247222e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.700559 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/58778956-908f-40b7-921d-920c247222e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.700593 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/58778956-908f-40b7-921d-920c247222e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.801598 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/58778956-908f-40b7-921d-920c247222e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.801650 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/58778956-908f-40b7-921d-920c247222e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.801721 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58778956-908f-40b7-921d-920c247222e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.801750 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49e8eaf2-6217-418c-ad6c-294176d90954\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49e8eaf2-6217-418c-ad6c-294176d90954\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.801775 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/58778956-908f-40b7-921d-920c247222e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.801806 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/58778956-908f-40b7-921d-920c247222e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.801829 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/58778956-908f-40b7-921d-920c247222e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.801863 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/58778956-908f-40b7-921d-920c247222e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.801926 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/58778956-908f-40b7-921d-920c247222e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.801954 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/58778956-908f-40b7-921d-920c247222e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.801979 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znqsq\" (UniqueName: \"kubernetes.io/projected/58778956-908f-40b7-921d-920c247222e7-kube-api-access-znqsq\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.802624 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/58778956-908f-40b7-921d-920c247222e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.802751 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/58778956-908f-40b7-921d-920c247222e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.803654 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/58778956-908f-40b7-921d-920c247222e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.804100 5065 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58778956-908f-40b7-921d-920c247222e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.805102 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/58778956-908f-40b7-921d-920c247222e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.805739 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/58778956-908f-40b7-921d-920c247222e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.805791 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/58778956-908f-40b7-921d-920c247222e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.806863 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/58778956-908f-40b7-921d-920c247222e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.806871 5065 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.806931 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49e8eaf2-6217-418c-ad6c-294176d90954\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49e8eaf2-6217-418c-ad6c-294176d90954\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/56fcdab089bca57706b366626fa0bf961f6dc474d430607fcd9bc885644ea46a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.807071 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/58778956-908f-40b7-921d-920c247222e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.818869 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znqsq\" (UniqueName: \"kubernetes.io/projected/58778956-908f-40b7-921d-920c247222e7-kube-api-access-znqsq\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.834533 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49e8eaf2-6217-418c-ad6c-294176d90954\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49e8eaf2-6217-418c-ad6c-294176d90954\") pod \"rabbitmq-cell1-server-0\" (UID: \"58778956-908f-40b7-921d-920c247222e7\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.885251 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ce3937-b175-4380-8a3d-f24a319e67e0" path="/var/lib/kubelet/pods/85ce3937-b175-4380-8a3d-f24a319e67e0/volumes" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.885291 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.887641 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a55db36a-c50d-4e84-b228-0c3d0b7d5578" path="/var/lib/kubelet/pods/a55db36a-c50d-4e84-b228-0c3d0b7d5578/volumes" Oct 08 14:41:26 crc kubenswrapper[5065]: I1008 14:41:26.888488 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa3d6f14-7344-4e2c-a932-af286e9861b2" path="/var/lib/kubelet/pods/aa3d6f14-7344-4e2c-a932-af286e9861b2/volumes" Oct 08 14:41:27 crc kubenswrapper[5065]: I1008 14:41:27.454237 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:41:27 crc kubenswrapper[5065]: W1008 14:41:27.457323 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58778956_908f_40b7_921d_920c247222e7.slice/crio-fdd1ccf9f59d2cef5fe80bea433e82d72da1f69ee090d1ec6e8fcef5899c1bbe WatchSource:0}: Error finding container fdd1ccf9f59d2cef5fe80bea433e82d72da1f69ee090d1ec6e8fcef5899c1bbe: Status 404 returned error can't find the container with id fdd1ccf9f59d2cef5fe80bea433e82d72da1f69ee090d1ec6e8fcef5899c1bbe Oct 08 14:41:27 crc kubenswrapper[5065]: I1008 14:41:27.488371 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"58778956-908f-40b7-921d-920c247222e7","Type":"ContainerStarted","Data":"fdd1ccf9f59d2cef5fe80bea433e82d72da1f69ee090d1ec6e8fcef5899c1bbe"} Oct 08 14:41:27 crc kubenswrapper[5065]: I1008 14:41:27.492081 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"abbae3c9-79f9-4195-8988-eb1137bfa8ee","Type":"ContainerStarted","Data":"8e09c381f191a95a797c17c20f9644973ab25c85e6468f2edd42ce85a2716639"} Oct 08 14:41:29 crc kubenswrapper[5065]: I1008 14:41:29.514218 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"58778956-908f-40b7-921d-920c247222e7","Type":"ContainerStarted","Data":"3e3e789ef1045e4c5724090ffd43c770003dc7dbd8216b4cbd5a6be5406a1d86"} Oct 08 14:41:31 crc kubenswrapper[5065]: I1008 14:41:31.873313 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:41:31 crc kubenswrapper[5065]: E1008 14:41:31.873841 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:41:46 crc kubenswrapper[5065]: I1008 14:41:46.873686 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:41:46 crc kubenswrapper[5065]: E1008 14:41:46.874498 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:41:59 crc 
kubenswrapper[5065]: I1008 14:41:59.814519 5065 generic.go:334] "Generic (PLEG): container finished" podID="abbae3c9-79f9-4195-8988-eb1137bfa8ee" containerID="8e09c381f191a95a797c17c20f9644973ab25c85e6468f2edd42ce85a2716639" exitCode=0 Oct 08 14:41:59 crc kubenswrapper[5065]: I1008 14:41:59.815104 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"abbae3c9-79f9-4195-8988-eb1137bfa8ee","Type":"ContainerDied","Data":"8e09c381f191a95a797c17c20f9644973ab25c85e6468f2edd42ce85a2716639"} Oct 08 14:42:00 crc kubenswrapper[5065]: I1008 14:42:00.828693 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"abbae3c9-79f9-4195-8988-eb1137bfa8ee","Type":"ContainerStarted","Data":"611e531c19b2a575f15960611434f44fbb4d9a09ab2b95060f7ff0ac3b611e54"} Oct 08 14:42:00 crc kubenswrapper[5065]: I1008 14:42:00.830087 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 08 14:42:00 crc kubenswrapper[5065]: I1008 14:42:00.863600 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=35.86358019 podStartE2EDuration="35.86358019s" podCreationTimestamp="2025-10-08 14:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:00.86064349 +0000 UTC m=+5022.638025267" watchObservedRunningTime="2025-10-08 14:42:00.86358019 +0000 UTC m=+5022.640961947" Oct 08 14:42:00 crc kubenswrapper[5065]: I1008 14:42:00.873329 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:42:00 crc kubenswrapper[5065]: E1008 14:42:00.873823 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:42:01 crc kubenswrapper[5065]: I1008 14:42:01.841641 5065 generic.go:334] "Generic (PLEG): container finished" podID="58778956-908f-40b7-921d-920c247222e7" containerID="3e3e789ef1045e4c5724090ffd43c770003dc7dbd8216b4cbd5a6be5406a1d86" exitCode=0 Oct 08 14:42:01 crc kubenswrapper[5065]: I1008 14:42:01.841664 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"58778956-908f-40b7-921d-920c247222e7","Type":"ContainerDied","Data":"3e3e789ef1045e4c5724090ffd43c770003dc7dbd8216b4cbd5a6be5406a1d86"} Oct 08 14:42:02 crc kubenswrapper[5065]: I1008 14:42:02.851898 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"58778956-908f-40b7-921d-920c247222e7","Type":"ContainerStarted","Data":"b9ba0d2d77fcf0a27858ec7337a0c97b1477bb6452b3cad2a7c8a73e0b436be7"} Oct 08 14:42:02 crc kubenswrapper[5065]: I1008 14:42:02.852582 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:42:02 crc kubenswrapper[5065]: I1008 14:42:02.881170 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.881141126 podStartE2EDuration="36.881141126s" 
podCreationTimestamp="2025-10-08 14:41:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:02.872791249 +0000 UTC m=+5024.650173046" watchObservedRunningTime="2025-10-08 14:42:02.881141126 +0000 UTC m=+5024.658522923" Oct 08 14:42:12 crc kubenswrapper[5065]: I1008 14:42:12.874049 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:42:12 crc kubenswrapper[5065]: E1008 14:42:12.874751 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:42:15 crc kubenswrapper[5065]: I1008 14:42:15.875761 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 08 14:42:16 crc kubenswrapper[5065]: I1008 14:42:16.889795 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:42:20 crc kubenswrapper[5065]: I1008 14:42:20.941912 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Oct 08 14:42:20 crc kubenswrapper[5065]: I1008 14:42:20.943449 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 08 14:42:20 crc kubenswrapper[5065]: I1008 14:42:20.947930 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wwbk7" Oct 08 14:42:20 crc kubenswrapper[5065]: I1008 14:42:20.952685 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 08 14:42:21 crc kubenswrapper[5065]: I1008 14:42:21.118754 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgxwh\" (UniqueName: \"kubernetes.io/projected/a755bc70-2e52-4ec3-be5c-6d0f1ec40d40-kube-api-access-fgxwh\") pod \"mariadb-client-1-default\" (UID: \"a755bc70-2e52-4ec3-be5c-6d0f1ec40d40\") " pod="openstack/mariadb-client-1-default" Oct 08 14:42:21 crc kubenswrapper[5065]: I1008 14:42:21.220373 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgxwh\" (UniqueName: \"kubernetes.io/projected/a755bc70-2e52-4ec3-be5c-6d0f1ec40d40-kube-api-access-fgxwh\") pod \"mariadb-client-1-default\" (UID: \"a755bc70-2e52-4ec3-be5c-6d0f1ec40d40\") " pod="openstack/mariadb-client-1-default" Oct 08 14:42:21 crc kubenswrapper[5065]: I1008 14:42:21.256285 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgxwh\" (UniqueName: \"kubernetes.io/projected/a755bc70-2e52-4ec3-be5c-6d0f1ec40d40-kube-api-access-fgxwh\") pod \"mariadb-client-1-default\" (UID: \"a755bc70-2e52-4ec3-be5c-6d0f1ec40d40\") " pod="openstack/mariadb-client-1-default" Oct 08 14:42:21 crc kubenswrapper[5065]: I1008 14:42:21.262077 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 08 14:42:21 crc kubenswrapper[5065]: I1008 14:42:21.767390 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 08 14:42:21 crc kubenswrapper[5065]: W1008 14:42:21.771654 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda755bc70_2e52_4ec3_be5c_6d0f1ec40d40.slice/crio-341391fe66d370ba0d51c04fa629b6d784c5d8e3a342ab01101913cbb2f8cdc7 WatchSource:0}: Error finding container 341391fe66d370ba0d51c04fa629b6d784c5d8e3a342ab01101913cbb2f8cdc7: Status 404 returned error can't find the container with id 341391fe66d370ba0d51c04fa629b6d784c5d8e3a342ab01101913cbb2f8cdc7 Oct 08 14:42:21 crc kubenswrapper[5065]: I1008 14:42:21.774319 5065 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 14:42:22 crc kubenswrapper[5065]: I1008 14:42:22.047177 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"a755bc70-2e52-4ec3-be5c-6d0f1ec40d40","Type":"ContainerStarted","Data":"341391fe66d370ba0d51c04fa629b6d784c5d8e3a342ab01101913cbb2f8cdc7"} Oct 08 14:42:23 crc kubenswrapper[5065]: I1008 14:42:23.070332 5065 generic.go:334] "Generic (PLEG): container finished" podID="a755bc70-2e52-4ec3-be5c-6d0f1ec40d40" containerID="75e10069d19d2185e8d20df62cfa13f125d011d08d136ab1736a0c0dbbea9cae" exitCode=0 Oct 08 14:42:23 crc kubenswrapper[5065]: I1008 14:42:23.070586 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"a755bc70-2e52-4ec3-be5c-6d0f1ec40d40","Type":"ContainerDied","Data":"75e10069d19d2185e8d20df62cfa13f125d011d08d136ab1736a0c0dbbea9cae"} Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.501274 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.526397 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_a755bc70-2e52-4ec3-be5c-6d0f1ec40d40/mariadb-client-1-default/0.log" Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.559109 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.569219 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.582599 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgxwh\" (UniqueName: \"kubernetes.io/projected/a755bc70-2e52-4ec3-be5c-6d0f1ec40d40-kube-api-access-fgxwh\") pod \"a755bc70-2e52-4ec3-be5c-6d0f1ec40d40\" (UID: \"a755bc70-2e52-4ec3-be5c-6d0f1ec40d40\") " Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.588608 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a755bc70-2e52-4ec3-be5c-6d0f1ec40d40-kube-api-access-fgxwh" (OuterVolumeSpecName: "kube-api-access-fgxwh") pod "a755bc70-2e52-4ec3-be5c-6d0f1ec40d40" (UID: "a755bc70-2e52-4ec3-be5c-6d0f1ec40d40"). InnerVolumeSpecName "kube-api-access-fgxwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.684682 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgxwh\" (UniqueName: \"kubernetes.io/projected/a755bc70-2e52-4ec3-be5c-6d0f1ec40d40-kube-api-access-fgxwh\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.873691 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:42:24 crc kubenswrapper[5065]: E1008 14:42:24.874363 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.884786 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a755bc70-2e52-4ec3-be5c-6d0f1ec40d40" path="/var/lib/kubelet/pods/a755bc70-2e52-4ec3-be5c-6d0f1ec40d40/volumes" Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.986907 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Oct 08 14:42:24 crc kubenswrapper[5065]: E1008 14:42:24.987733 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a755bc70-2e52-4ec3-be5c-6d0f1ec40d40" containerName="mariadb-client-1-default" Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.987934 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="a755bc70-2e52-4ec3-be5c-6d0f1ec40d40" containerName="mariadb-client-1-default" Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.988531 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="a755bc70-2e52-4ec3-be5c-6d0f1ec40d40" containerName="mariadb-client-1-default" Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.989828 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 08 14:42:24 crc kubenswrapper[5065]: I1008 14:42:24.995579 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 08 14:42:25 crc kubenswrapper[5065]: I1008 14:42:25.091323 5065 scope.go:117] "RemoveContainer" containerID="75e10069d19d2185e8d20df62cfa13f125d011d08d136ab1736a0c0dbbea9cae" Oct 08 14:42:25 crc kubenswrapper[5065]: I1008 14:42:25.091376 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 08 14:42:25 crc kubenswrapper[5065]: I1008 14:42:25.092490 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf5k8\" (UniqueName: \"kubernetes.io/projected/9fe501c8-f5eb-4d01-acbd-b8e8e78290b6-kube-api-access-cf5k8\") pod \"mariadb-client-2-default\" (UID: \"9fe501c8-f5eb-4d01-acbd-b8e8e78290b6\") " pod="openstack/mariadb-client-2-default" Oct 08 14:42:25 crc kubenswrapper[5065]: I1008 14:42:25.193896 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf5k8\" (UniqueName: \"kubernetes.io/projected/9fe501c8-f5eb-4d01-acbd-b8e8e78290b6-kube-api-access-cf5k8\") pod \"mariadb-client-2-default\" (UID: \"9fe501c8-f5eb-4d01-acbd-b8e8e78290b6\") " pod="openstack/mariadb-client-2-default" Oct 08 14:42:25 crc kubenswrapper[5065]: I1008 14:42:25.223712 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf5k8\" (UniqueName: \"kubernetes.io/projected/9fe501c8-f5eb-4d01-acbd-b8e8e78290b6-kube-api-access-cf5k8\") pod \"mariadb-client-2-default\" (UID: \"9fe501c8-f5eb-4d01-acbd-b8e8e78290b6\") " pod="openstack/mariadb-client-2-default" Oct 08 14:42:25 crc kubenswrapper[5065]: I1008 14:42:25.322589 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 08 14:42:25 crc kubenswrapper[5065]: I1008 14:42:25.920061 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 08 14:42:25 crc kubenswrapper[5065]: W1008 14:42:25.927650 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fe501c8_f5eb_4d01_acbd_b8e8e78290b6.slice/crio-adafaf6d5057591062f6dc7a9a8d8fb114c7c350af01afb2c0ee90ea555d07a2 WatchSource:0}: Error finding container adafaf6d5057591062f6dc7a9a8d8fb114c7c350af01afb2c0ee90ea555d07a2: Status 404 returned error can't find the container with id adafaf6d5057591062f6dc7a9a8d8fb114c7c350af01afb2c0ee90ea555d07a2 Oct 08 14:42:26 crc kubenswrapper[5065]: I1008 14:42:26.101961 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"9fe501c8-f5eb-4d01-acbd-b8e8e78290b6","Type":"ContainerStarted","Data":"adafaf6d5057591062f6dc7a9a8d8fb114c7c350af01afb2c0ee90ea555d07a2"} Oct 08 14:42:27 crc kubenswrapper[5065]: I1008 14:42:27.116619 5065 generic.go:334] "Generic (PLEG): container finished" podID="9fe501c8-f5eb-4d01-acbd-b8e8e78290b6" containerID="c8b3f6f56add9afe0977f65b93b09bc1ae27e717b44bca456695d4c596bc7231" exitCode=1 Oct 08 14:42:27 crc kubenswrapper[5065]: I1008 14:42:27.116737 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"9fe501c8-f5eb-4d01-acbd-b8e8e78290b6","Type":"ContainerDied","Data":"c8b3f6f56add9afe0977f65b93b09bc1ae27e717b44bca456695d4c596bc7231"}
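
The "Generic (PLEG): container finished" entries come from the kubelet's Pod Lifecycle Event Generator, which relists runtime state and turns changes into the ContainerStarted/ContainerDied events above; note mariadb-client-1-default finished with exitCode=0 while mariadb-client-2-default finished with exitCode=1. The same transitions are visible from outside the node through the API. A hedged client-go sketch (the openstack namespace is taken from the log; in-cluster config is an assumption):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Watch pod status updates in the namespace and report terminations,
        // mirroring the PLEG ContainerDied events in the log.
        w, err := cs.CoreV1().Pods("openstack").Watch(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for ev := range w.ResultChan() {
            pod, ok := ev.Object.(*corev1.Pod)
            if !ok {
                continue
            }
            for _, st := range pod.Status.ContainerStatuses {
                if t := st.State.Terminated; t != nil {
                    fmt.Printf("%s/%s exited with code %d\n", pod.Name, st.Name, t.ExitCode)
                }
            }
        }
    }

The Status 404 "Error finding container" warnings from manager.go appear to be cAdvisor's cgroup watcher racing these short-lived containers: the cgroup event is processed before or after the container can actually be inspected. In this capture the pods start and finish normally regardless.
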
Oct 08 14:42:28 crc kubenswrapper[5065]: I1008 14:42:28.590230 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 08 14:42:28 crc kubenswrapper[5065]: I1008 14:42:28.610663 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2-default_9fe501c8-f5eb-4d01-acbd-b8e8e78290b6/mariadb-client-2-default/0.log" Oct 08 14:42:28 crc kubenswrapper[5065]: I1008 14:42:28.636700 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 08 14:42:28 crc kubenswrapper[5065]: I1008 14:42:28.642012 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 08 14:42:28 crc kubenswrapper[5065]: I1008 14:42:28.647189 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf5k8\" (UniqueName: \"kubernetes.io/projected/9fe501c8-f5eb-4d01-acbd-b8e8e78290b6-kube-api-access-cf5k8\") pod \"9fe501c8-f5eb-4d01-acbd-b8e8e78290b6\" (UID: \"9fe501c8-f5eb-4d01-acbd-b8e8e78290b6\") " Oct 08 14:42:28 crc kubenswrapper[5065]: I1008 14:42:28.652237 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe501c8-f5eb-4d01-acbd-b8e8e78290b6-kube-api-access-cf5k8" (OuterVolumeSpecName: "kube-api-access-cf5k8") pod "9fe501c8-f5eb-4d01-acbd-b8e8e78290b6" (UID: "9fe501c8-f5eb-4d01-acbd-b8e8e78290b6"). InnerVolumeSpecName "kube-api-access-cf5k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:28 crc kubenswrapper[5065]: I1008 14:42:28.749808 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf5k8\" (UniqueName: \"kubernetes.io/projected/9fe501c8-f5eb-4d01-acbd-b8e8e78290b6-kube-api-access-cf5k8\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:28 crc kubenswrapper[5065]: I1008 14:42:28.884615 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fe501c8-f5eb-4d01-acbd-b8e8e78290b6" path="/var/lib/kubelet/pods/9fe501c8-f5eb-4d01-acbd-b8e8e78290b6/volumes" Oct 08 14:42:29 crc kubenswrapper[5065]: I1008 14:42:29.069406 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Oct 08 14:42:29 crc kubenswrapper[5065]: E1008 14:42:29.070031 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe501c8-f5eb-4d01-acbd-b8e8e78290b6" containerName="mariadb-client-2-default" Oct 08 14:42:29 crc kubenswrapper[5065]: I1008 14:42:29.070075 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe501c8-f5eb-4d01-acbd-b8e8e78290b6" containerName="mariadb-client-2-default" Oct 08 14:42:29 crc kubenswrapper[5065]: I1008 14:42:29.070538 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe501c8-f5eb-4d01-acbd-b8e8e78290b6" containerName="mariadb-client-2-default" Oct 08 14:42:29 crc kubenswrapper[5065]: I1008 14:42:29.071684 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Oct 08 14:42:29 crc kubenswrapper[5065]: I1008 14:42:29.089833 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Oct 08 14:42:29 crc kubenswrapper[5065]: I1008 14:42:29.134353 5065 scope.go:117] "RemoveContainer" containerID="c8b3f6f56add9afe0977f65b93b09bc1ae27e717b44bca456695d4c596bc7231" Oct 08 14:42:29 crc kubenswrapper[5065]: I1008 14:42:29.134383 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 08 14:42:29 crc kubenswrapper[5065]: I1008 14:42:29.157530 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ztk4\" (UniqueName: \"kubernetes.io/projected/62986a34-0f89-4b67-9c56-8a26b4d7ce45-kube-api-access-4ztk4\") pod \"mariadb-client-1\" (UID: \"62986a34-0f89-4b67-9c56-8a26b4d7ce45\") " pod="openstack/mariadb-client-1" Oct 08 14:42:29 crc kubenswrapper[5065]: I1008 14:42:29.259080 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ztk4\" (UniqueName: \"kubernetes.io/projected/62986a34-0f89-4b67-9c56-8a26b4d7ce45-kube-api-access-4ztk4\") pod \"mariadb-client-1\" (UID: \"62986a34-0f89-4b67-9c56-8a26b4d7ce45\") " pod="openstack/mariadb-client-1" Oct 08 14:42:29 crc kubenswrapper[5065]: I1008 14:42:29.278439 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ztk4\" (UniqueName: \"kubernetes.io/projected/62986a34-0f89-4b67-9c56-8a26b4d7ce45-kube-api-access-4ztk4\") pod \"mariadb-client-1\" (UID: \"62986a34-0f89-4b67-9c56-8a26b4d7ce45\") " pod="openstack/mariadb-client-1" Oct 08 14:42:29 crc kubenswrapper[5065]: I1008 14:42:29.400764 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Oct 08 14:42:29 crc kubenswrapper[5065]: I1008 14:42:29.930098 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Oct 08 14:42:30 crc kubenswrapper[5065]: I1008 14:42:30.144258 5065 generic.go:334] "Generic (PLEG): container finished" podID="62986a34-0f89-4b67-9c56-8a26b4d7ce45" containerID="1f1eff6ef60a9b385d4f9040f27c01b82b8e4e6428c6faa8f5190c6a06d36975" exitCode=0 Oct 08 14:42:30 crc kubenswrapper[5065]: I1008 14:42:30.144492 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"62986a34-0f89-4b67-9c56-8a26b4d7ce45","Type":"ContainerDied","Data":"1f1eff6ef60a9b385d4f9040f27c01b82b8e4e6428c6faa8f5190c6a06d36975"} Oct 08 14:42:30 crc kubenswrapper[5065]: I1008 14:42:30.144610 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"62986a34-0f89-4b67-9c56-8a26b4d7ce45","Type":"ContainerStarted","Data":"74d04962d01a91fc96df143aa812654ba9ce19c55756231612baa4dacbb94f16"} Oct 08 14:42:31 crc kubenswrapper[5065]: I1008 14:42:31.527528 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Oct 08 14:42:31 crc kubenswrapper[5065]: I1008 14:42:31.581510 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_62986a34-0f89-4b67-9c56-8a26b4d7ce45/mariadb-client-1/0.log" Oct 08 14:42:31 crc kubenswrapper[5065]: I1008 14:42:31.597953 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ztk4\" (UniqueName: \"kubernetes.io/projected/62986a34-0f89-4b67-9c56-8a26b4d7ce45-kube-api-access-4ztk4\") pod \"62986a34-0f89-4b67-9c56-8a26b4d7ce45\" (UID: \"62986a34-0f89-4b67-9c56-8a26b4d7ce45\") " Oct 08 14:42:31 crc kubenswrapper[5065]: I1008 14:42:31.612918 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62986a34-0f89-4b67-9c56-8a26b4d7ce45-kube-api-access-4ztk4" (OuterVolumeSpecName: "kube-api-access-4ztk4") pod "62986a34-0f89-4b67-9c56-8a26b4d7ce45" (UID: "62986a34-0f89-4b67-9c56-8a26b4d7ce45"). 
InnerVolumeSpecName "kube-api-access-4ztk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:31 crc kubenswrapper[5065]: I1008 14:42:31.617313 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Oct 08 14:42:31 crc kubenswrapper[5065]: I1008 14:42:31.629016 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Oct 08 14:42:31 crc kubenswrapper[5065]: I1008 14:42:31.700875 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ztk4\" (UniqueName: \"kubernetes.io/projected/62986a34-0f89-4b67-9c56-8a26b4d7ce45-kube-api-access-4ztk4\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:32 crc kubenswrapper[5065]: I1008 14:42:32.024213 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Oct 08 14:42:32 crc kubenswrapper[5065]: E1008 14:42:32.024717 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62986a34-0f89-4b67-9c56-8a26b4d7ce45" containerName="mariadb-client-1" Oct 08 14:42:32 crc kubenswrapper[5065]: I1008 14:42:32.024747 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="62986a34-0f89-4b67-9c56-8a26b4d7ce45" containerName="mariadb-client-1" Oct 08 14:42:32 crc kubenswrapper[5065]: I1008 14:42:32.025060 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="62986a34-0f89-4b67-9c56-8a26b4d7ce45" containerName="mariadb-client-1" Oct 08 14:42:32 crc kubenswrapper[5065]: I1008 14:42:32.026151 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 08 14:42:32 crc kubenswrapper[5065]: I1008 14:42:32.031820 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 08 14:42:32 crc kubenswrapper[5065]: I1008 14:42:32.110198 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2sbc\" (UniqueName: \"kubernetes.io/projected/e3363380-7d07-4b38-9408-6560c6044c90-kube-api-access-d2sbc\") pod \"mariadb-client-4-default\" (UID: \"e3363380-7d07-4b38-9408-6560c6044c90\") " pod="openstack/mariadb-client-4-default" Oct 08 14:42:32 crc kubenswrapper[5065]: I1008 14:42:32.163375 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74d04962d01a91fc96df143aa812654ba9ce19c55756231612baa4dacbb94f16" Oct 08 14:42:32 crc kubenswrapper[5065]: I1008 14:42:32.163530 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Oct 08 14:42:32 crc kubenswrapper[5065]: I1008 14:42:32.211941 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2sbc\" (UniqueName: \"kubernetes.io/projected/e3363380-7d07-4b38-9408-6560c6044c90-kube-api-access-d2sbc\") pod \"mariadb-client-4-default\" (UID: \"e3363380-7d07-4b38-9408-6560c6044c90\") " pod="openstack/mariadb-client-4-default" Oct 08 14:42:32 crc kubenswrapper[5065]: I1008 14:42:32.240222 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2sbc\" (UniqueName: \"kubernetes.io/projected/e3363380-7d07-4b38-9408-6560c6044c90-kube-api-access-d2sbc\") pod \"mariadb-client-4-default\" (UID: \"e3363380-7d07-4b38-9408-6560c6044c90\") " pod="openstack/mariadb-client-4-default" Oct 08 14:42:32 crc kubenswrapper[5065]: I1008 14:42:32.352370 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 08 14:42:32 crc kubenswrapper[5065]: I1008 14:42:32.886402 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62986a34-0f89-4b67-9c56-8a26b4d7ce45" path="/var/lib/kubelet/pods/62986a34-0f89-4b67-9c56-8a26b4d7ce45/volumes" Oct 08 14:42:32 crc kubenswrapper[5065]: I1008 14:42:32.920494 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 08 14:42:32 crc kubenswrapper[5065]: W1008 14:42:32.931596 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3363380_7d07_4b38_9408_6560c6044c90.slice/crio-798e842d74187881c671b14c859ef7daef1a7bafaafd9473b1d2b8eeea84b2d0 WatchSource:0}: Error finding container 798e842d74187881c671b14c859ef7daef1a7bafaafd9473b1d2b8eeea84b2d0: Status 404 returned error can't find the container with id 798e842d74187881c671b14c859ef7daef1a7bafaafd9473b1d2b8eeea84b2d0 Oct 08 14:42:33 crc kubenswrapper[5065]: I1008 14:42:33.173459 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"e3363380-7d07-4b38-9408-6560c6044c90","Type":"ContainerStarted","Data":"da8d01ac05a54e79a180c04198fff0d901650dce879ab0dd7687512c643b88bd"} Oct 08 14:42:33 crc kubenswrapper[5065]: I1008 14:42:33.173756 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"e3363380-7d07-4b38-9408-6560c6044c90","Type":"ContainerStarted","Data":"798e842d74187881c671b14c859ef7daef1a7bafaafd9473b1d2b8eeea84b2d0"} Oct 08 14:42:33 crc kubenswrapper[5065]: I1008 14:42:33.187844 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-4-default" podStartSLOduration=2.187818479 podStartE2EDuration="2.187818479s" podCreationTimestamp="2025-10-08 14:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:33.185470035 +0000 UTC m=+5054.962851802" watchObservedRunningTime="2025-10-08 14:42:33.187818479 +0000 UTC m=+5054.965200236" Oct 08 14:42:33 crc kubenswrapper[5065]: I1008 14:42:33.226948 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_e3363380-7d07-4b38-9408-6560c6044c90/mariadb-client-4-default/0.log" Oct 08 14:42:34 crc kubenswrapper[5065]: I1008 14:42:34.184646 5065 generic.go:334] "Generic (PLEG): container finished" podID="e3363380-7d07-4b38-9408-6560c6044c90" containerID="da8d01ac05a54e79a180c04198fff0d901650dce879ab0dd7687512c643b88bd" exitCode=0 Oct 08 14:42:34 crc kubenswrapper[5065]: I1008 14:42:34.184708 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"e3363380-7d07-4b38-9408-6560c6044c90","Type":"ContainerDied","Data":"da8d01ac05a54e79a180c04198fff0d901650dce879ab0dd7687512c643b88bd"}
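
The pod_startup_latency_tracker entry above reports podStartE2EDuration as the gap between podCreationTimestamp and the observed running time, and the zero firstStartedPulling/lastFinishedPulling values suggest the image was already present on the node. The "m=+5054.96..." suffixes are Go's monotonic-clock readings, printed whenever a time.Time still carries one. A quick Go check reproducing the logged figure:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the pod_startup_latency_tracker entry above.
        created, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "2025-10-08 14:42:31 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", "2025-10-08 14:42:33.187818479 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 2.187818479s, matching podStartE2EDuration in the log.
        fmt.Println("podStartE2EDuration =", running.Sub(created))
    }
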
Oct 08 14:42:35 crc kubenswrapper[5065]: I1008 14:42:35.598747 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 08 14:42:35 crc kubenswrapper[5065]: I1008 14:42:35.634508 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 08 14:42:35 crc kubenswrapper[5065]: I1008 14:42:35.639200 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 08 14:42:35 crc kubenswrapper[5065]: I1008 14:42:35.672583 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2sbc\" (UniqueName: \"kubernetes.io/projected/e3363380-7d07-4b38-9408-6560c6044c90-kube-api-access-d2sbc\") pod \"e3363380-7d07-4b38-9408-6560c6044c90\" (UID: \"e3363380-7d07-4b38-9408-6560c6044c90\") " Oct 08 14:42:35 crc kubenswrapper[5065]: I1008 14:42:35.677089 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3363380-7d07-4b38-9408-6560c6044c90-kube-api-access-d2sbc" (OuterVolumeSpecName: "kube-api-access-d2sbc") pod "e3363380-7d07-4b38-9408-6560c6044c90" (UID: "e3363380-7d07-4b38-9408-6560c6044c90"). InnerVolumeSpecName "kube-api-access-d2sbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:35 crc kubenswrapper[5065]: I1008 14:42:35.774606 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2sbc\" (UniqueName: \"kubernetes.io/projected/e3363380-7d07-4b38-9408-6560c6044c90-kube-api-access-d2sbc\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:36 crc kubenswrapper[5065]: I1008 14:42:36.208156 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 08 14:42:36 crc kubenswrapper[5065]: I1008 14:42:36.208174 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="798e842d74187881c671b14c859ef7daef1a7bafaafd9473b1d2b8eeea84b2d0" Oct 08 14:42:36 crc kubenswrapper[5065]: E1008 14:42:36.301341 5065 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3363380_7d07_4b38_9408_6560c6044c90.slice\": RecentStats: unable to find data in memory cache]" Oct 08 14:42:36 crc kubenswrapper[5065]: I1008 14:42:36.890411 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3363380-7d07-4b38-9408-6560c6044c90" path="/var/lib/kubelet/pods/e3363380-7d07-4b38-9408-6560c6044c90/volumes" Oct 08 14:42:37 crc kubenswrapper[5065]: I1008 14:42:37.874804 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:42:37 crc kubenswrapper[5065]: E1008 14:42:37.875695 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:42:39 crc kubenswrapper[5065]: I1008 14:42:39.557200 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Oct 08 14:42:39 crc kubenswrapper[5065]: E1008 14:42:39.558982 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3363380-7d07-4b38-9408-6560c6044c90" containerName="mariadb-client-4-default"
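
Each mariadb-client-*-default pod in this stretch follows the same one-shot pattern: API ADD, sandbox and volume setup, a single container that runs to completion, then DELETE, volume teardown, and a cpu_manager/state_mem/memory_manager "RemoveStaleState" scrub of the per-container resource-manager state. A sketch of driving such a one-shot pod with client-go (pod name, image, command, and kubeconfig path are hypothetical; only the pattern is taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "mariadb-client-demo", Namespace: "openstack"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever, // one-shot: run once, then get collected
                Containers: []corev1.Container{{
                    Name:    "client",
                    Image:   "quay.io/example/mariadb-client:latest", // hypothetical image
                    Command: []string{"mysql", "--version"},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("openstack").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        for {
            p, err := cs.CoreV1().Pods("openstack").Get(ctx, pod.Name, metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
                fmt.Println("finished with phase:", p.Status.Phase)
                return
            }
            time.Sleep(time.Second)
        }
    }
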
Oct 08 14:42:39 crc kubenswrapper[5065]: I1008 14:42:39.559015 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3363380-7d07-4b38-9408-6560c6044c90" containerName="mariadb-client-4-default" Oct 08 14:42:39 crc kubenswrapper[5065]: I1008 14:42:39.559203 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3363380-7d07-4b38-9408-6560c6044c90" containerName="mariadb-client-4-default" Oct 08 14:42:39 crc kubenswrapper[5065]: I1008 14:42:39.560835 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 08 14:42:39 crc kubenswrapper[5065]: I1008 14:42:39.566088 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wwbk7" Oct 08 14:42:39 crc kubenswrapper[5065]: I1008 14:42:39.571023 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 08 14:42:39 crc kubenswrapper[5065]: I1008 14:42:39.647418 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vdvc\" (UniqueName: \"kubernetes.io/projected/c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493-kube-api-access-8vdvc\") pod \"mariadb-client-5-default\" (UID: \"c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493\") " pod="openstack/mariadb-client-5-default" Oct 08 14:42:39 crc kubenswrapper[5065]: I1008 14:42:39.749174 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vdvc\" (UniqueName: \"kubernetes.io/projected/c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493-kube-api-access-8vdvc\") pod \"mariadb-client-5-default\" (UID: \"c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493\") " pod="openstack/mariadb-client-5-default" Oct 08 14:42:39 crc kubenswrapper[5065]: I1008 14:42:39.782188 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vdvc\" (UniqueName: \"kubernetes.io/projected/c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493-kube-api-access-8vdvc\") pod \"mariadb-client-5-default\" (UID: \"c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493\") " pod="openstack/mariadb-client-5-default" Oct 08 14:42:39 crc kubenswrapper[5065]: I1008 14:42:39.891817 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 08 14:42:40 crc kubenswrapper[5065]: I1008 14:42:40.247354 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 08 14:42:41 crc kubenswrapper[5065]: I1008 14:42:41.250819 5065 generic.go:334] "Generic (PLEG): container finished" podID="c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493" containerID="7ad0eae46096560712b5cac08d824b71c1fdc569a0f44d228cd6b3b6cb320fec" exitCode=0 Oct 08 14:42:41 crc kubenswrapper[5065]: I1008 14:42:41.250877 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493","Type":"ContainerDied","Data":"7ad0eae46096560712b5cac08d824b71c1fdc569a0f44d228cd6b3b6cb320fec"} Oct 08 14:42:41 crc kubenswrapper[5065]: I1008 14:42:41.251579 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493","Type":"ContainerStarted","Data":"1937f5f31b31e767f8e337e0ec6123e17da11ead590a1be09fd2d94678624d55"} Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.690642 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.710051 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493/mariadb-client-5-default/0.log" Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.733070 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.737224 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.795088 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vdvc\" (UniqueName: \"kubernetes.io/projected/c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493-kube-api-access-8vdvc\") pod \"c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493\" (UID: \"c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493\") " Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.800078 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493-kube-api-access-8vdvc" (OuterVolumeSpecName: "kube-api-access-8vdvc") pod "c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493" (UID: "c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493"). InnerVolumeSpecName "kube-api-access-8vdvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.866889 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Oct 08 14:42:42 crc kubenswrapper[5065]: E1008 14:42:42.867180 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493" containerName="mariadb-client-5-default" Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.867191 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493" containerName="mariadb-client-5-default" Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.867326 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493" containerName="mariadb-client-5-default" Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.868001 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.892835 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493" path="/var/lib/kubelet/pods/c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493/volumes" Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.893583 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.896808 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vdvc\" (UniqueName: \"kubernetes.io/projected/c6d0d6d7-8524-4cb2-a85c-18d5aa5c7493-kube-api-access-8vdvc\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:42 crc kubenswrapper[5065]: I1008 14:42:42.999346 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shjrc\" (UniqueName: \"kubernetes.io/projected/91dcd58b-176e-424e-9b12-62fcb4c2ac3d-kube-api-access-shjrc\") pod \"mariadb-client-6-default\" (UID: \"91dcd58b-176e-424e-9b12-62fcb4c2ac3d\") " pod="openstack/mariadb-client-6-default" Oct 08 14:42:43 crc kubenswrapper[5065]: I1008 14:42:43.101282 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shjrc\" (UniqueName: \"kubernetes.io/projected/91dcd58b-176e-424e-9b12-62fcb4c2ac3d-kube-api-access-shjrc\") pod \"mariadb-client-6-default\" (UID: \"91dcd58b-176e-424e-9b12-62fcb4c2ac3d\") " pod="openstack/mariadb-client-6-default" Oct 08 14:42:43 crc kubenswrapper[5065]: I1008 14:42:43.125641 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shjrc\" (UniqueName: \"kubernetes.io/projected/91dcd58b-176e-424e-9b12-62fcb4c2ac3d-kube-api-access-shjrc\") pod \"mariadb-client-6-default\" (UID: \"91dcd58b-176e-424e-9b12-62fcb4c2ac3d\") " pod="openstack/mariadb-client-6-default" Oct 08 14:42:43 crc kubenswrapper[5065]: I1008 14:42:43.200082 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 08 14:42:43 crc kubenswrapper[5065]: I1008 14:42:43.272216 5065 scope.go:117] "RemoveContainer" containerID="7ad0eae46096560712b5cac08d824b71c1fdc569a0f44d228cd6b3b6cb320fec" Oct 08 14:42:43 crc kubenswrapper[5065]: I1008 14:42:43.272241 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 08 14:42:43 crc kubenswrapper[5065]: I1008 14:42:43.726116 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 08 14:42:44 crc kubenswrapper[5065]: I1008 14:42:44.286333 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"91dcd58b-176e-424e-9b12-62fcb4c2ac3d","Type":"ContainerStarted","Data":"456cab4b46897dc6a900960b7fbbf484d498e55f8e6bc779fc8c7d111207e27a"} Oct 08 14:42:44 crc kubenswrapper[5065]: I1008 14:42:44.286759 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"91dcd58b-176e-424e-9b12-62fcb4c2ac3d","Type":"ContainerStarted","Data":"82670bee6483806b26de5e4894f7696c523eb99febe530f44dbf2c4b9bc4427d"} Oct 08 14:42:44 crc kubenswrapper[5065]: I1008 14:42:44.311606 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=2.311584196 podStartE2EDuration="2.311584196s" podCreationTimestamp="2025-10-08 14:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:44.302712765 +0000 UTC m=+5066.080094572" watchObservedRunningTime="2025-10-08 14:42:44.311584196 +0000 UTC m=+5066.088965973" Oct 08 14:42:45 crc kubenswrapper[5065]: I1008 14:42:45.300340 5065 generic.go:334] "Generic (PLEG): container finished" podID="91dcd58b-176e-424e-9b12-62fcb4c2ac3d" containerID="456cab4b46897dc6a900960b7fbbf484d498e55f8e6bc779fc8c7d111207e27a" exitCode=1 Oct 08 14:42:45 crc kubenswrapper[5065]: I1008 14:42:45.300402 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"91dcd58b-176e-424e-9b12-62fcb4c2ac3d","Type":"ContainerDied","Data":"456cab4b46897dc6a900960b7fbbf484d498e55f8e6bc779fc8c7d111207e27a"} Oct 08 14:42:46 crc kubenswrapper[5065]: I1008 14:42:46.671894 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 08 14:42:46 crc kubenswrapper[5065]: I1008 14:42:46.716720 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 08 14:42:46 crc kubenswrapper[5065]: I1008 14:42:46.726653 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 08 14:42:46 crc kubenswrapper[5065]: I1008 14:42:46.766278 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shjrc\" (UniqueName: \"kubernetes.io/projected/91dcd58b-176e-424e-9b12-62fcb4c2ac3d-kube-api-access-shjrc\") pod \"91dcd58b-176e-424e-9b12-62fcb4c2ac3d\" (UID: \"91dcd58b-176e-424e-9b12-62fcb4c2ac3d\") " Oct 08 14:42:46 crc kubenswrapper[5065]: I1008 14:42:46.771683 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91dcd58b-176e-424e-9b12-62fcb4c2ac3d-kube-api-access-shjrc" (OuterVolumeSpecName: "kube-api-access-shjrc") pod "91dcd58b-176e-424e-9b12-62fcb4c2ac3d" (UID: "91dcd58b-176e-424e-9b12-62fcb4c2ac3d"). InnerVolumeSpecName "kube-api-access-shjrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:46 crc kubenswrapper[5065]: I1008 14:42:46.854310 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Oct 08 14:42:46 crc kubenswrapper[5065]: E1008 14:42:46.854837 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91dcd58b-176e-424e-9b12-62fcb4c2ac3d" containerName="mariadb-client-6-default" Oct 08 14:42:46 crc kubenswrapper[5065]: I1008 14:42:46.854858 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="91dcd58b-176e-424e-9b12-62fcb4c2ac3d" containerName="mariadb-client-6-default" Oct 08 14:42:46 crc kubenswrapper[5065]: I1008 14:42:46.855095 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="91dcd58b-176e-424e-9b12-62fcb4c2ac3d" containerName="mariadb-client-6-default" Oct 08 14:42:46 crc kubenswrapper[5065]: I1008 14:42:46.856001 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 08 14:42:46 crc kubenswrapper[5065]: I1008 14:42:46.864266 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 08 14:42:46 crc kubenswrapper[5065]: I1008 14:42:46.871593 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shjrc\" (UniqueName: \"kubernetes.io/projected/91dcd58b-176e-424e-9b12-62fcb4c2ac3d-kube-api-access-shjrc\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:46 crc kubenswrapper[5065]: I1008 14:42:46.883989 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91dcd58b-176e-424e-9b12-62fcb4c2ac3d" path="/var/lib/kubelet/pods/91dcd58b-176e-424e-9b12-62fcb4c2ac3d/volumes" Oct 08 14:42:46 crc kubenswrapper[5065]: I1008 14:42:46.973084 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq4xm\" (UniqueName: \"kubernetes.io/projected/f814194e-462b-4b9f-9d6c-cc9c164b1ef4-kube-api-access-gq4xm\") pod \"mariadb-client-7-default\" (UID: \"f814194e-462b-4b9f-9d6c-cc9c164b1ef4\") " pod="openstack/mariadb-client-7-default" Oct 08 14:42:47 crc kubenswrapper[5065]: I1008 14:42:47.074314 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq4xm\" (UniqueName: \"kubernetes.io/projected/f814194e-462b-4b9f-9d6c-cc9c164b1ef4-kube-api-access-gq4xm\") pod \"mariadb-client-7-default\" (UID: \"f814194e-462b-4b9f-9d6c-cc9c164b1ef4\") " pod="openstack/mariadb-client-7-default" Oct 08 14:42:47 crc kubenswrapper[5065]: I1008 14:42:47.092777 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq4xm\" (UniqueName: \"kubernetes.io/projected/f814194e-462b-4b9f-9d6c-cc9c164b1ef4-kube-api-access-gq4xm\") pod \"mariadb-client-7-default\" (UID: \"f814194e-462b-4b9f-9d6c-cc9c164b1ef4\") " pod="openstack/mariadb-client-7-default" Oct 08 14:42:47 crc kubenswrapper[5065]: I1008 14:42:47.189892 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 08 14:42:47 crc kubenswrapper[5065]: I1008 14:42:47.320646 5065 scope.go:117] "RemoveContainer" containerID="456cab4b46897dc6a900960b7fbbf484d498e55f8e6bc779fc8c7d111207e27a" Oct 08 14:42:47 crc kubenswrapper[5065]: I1008 14:42:47.320783 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 08 14:42:47 crc kubenswrapper[5065]: I1008 14:42:47.687269 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 08 14:42:48 crc kubenswrapper[5065]: I1008 14:42:48.334068 5065 generic.go:334] "Generic (PLEG): container finished" podID="f814194e-462b-4b9f-9d6c-cc9c164b1ef4" containerID="c47c4d5ca4b2e677faef75fa4e39bf4953b33c59f4467d06cf34eae83aa1c74d" exitCode=1 Oct 08 14:42:48 crc kubenswrapper[5065]: I1008 14:42:48.334169 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"f814194e-462b-4b9f-9d6c-cc9c164b1ef4","Type":"ContainerDied","Data":"c47c4d5ca4b2e677faef75fa4e39bf4953b33c59f4467d06cf34eae83aa1c74d"} Oct 08 14:42:48 crc kubenswrapper[5065]: I1008 14:42:48.334454 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"f814194e-462b-4b9f-9d6c-cc9c164b1ef4","Type":"ContainerStarted","Data":"dcd0e76d473d038a1ba625da4cefce99b31b5a4437eeb4a288f49947cb057f41"} Oct 08 14:42:48 crc kubenswrapper[5065]: I1008 14:42:48.886661 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:42:48 crc kubenswrapper[5065]: E1008 14:42:48.887148 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:42:49 crc kubenswrapper[5065]: I1008 14:42:49.704410 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 08 14:42:49 crc kubenswrapper[5065]: I1008 14:42:49.729598 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_f814194e-462b-4b9f-9d6c-cc9c164b1ef4/mariadb-client-7-default/0.log" Oct 08 14:42:49 crc kubenswrapper[5065]: I1008 14:42:49.761517 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 08 14:42:49 crc kubenswrapper[5065]: I1008 14:42:49.768080 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 08 14:42:49 crc kubenswrapper[5065]: I1008 14:42:49.848029 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq4xm\" (UniqueName: \"kubernetes.io/projected/f814194e-462b-4b9f-9d6c-cc9c164b1ef4-kube-api-access-gq4xm\") pod \"f814194e-462b-4b9f-9d6c-cc9c164b1ef4\" (UID: \"f814194e-462b-4b9f-9d6c-cc9c164b1ef4\") " Oct 08 14:42:49 crc kubenswrapper[5065]: I1008 14:42:49.855222 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f814194e-462b-4b9f-9d6c-cc9c164b1ef4-kube-api-access-gq4xm" (OuterVolumeSpecName: "kube-api-access-gq4xm") pod "f814194e-462b-4b9f-9d6c-cc9c164b1ef4" (UID: "f814194e-462b-4b9f-9d6c-cc9c164b1ef4"). InnerVolumeSpecName "kube-api-access-gq4xm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:49 crc kubenswrapper[5065]: I1008 14:42:49.900900 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Oct 08 14:42:49 crc kubenswrapper[5065]: E1008 14:42:49.901180 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f814194e-462b-4b9f-9d6c-cc9c164b1ef4" containerName="mariadb-client-7-default" Oct 08 14:42:49 crc kubenswrapper[5065]: I1008 14:42:49.901191 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f814194e-462b-4b9f-9d6c-cc9c164b1ef4" containerName="mariadb-client-7-default" Oct 08 14:42:49 crc kubenswrapper[5065]: I1008 14:42:49.901345 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="f814194e-462b-4b9f-9d6c-cc9c164b1ef4" containerName="mariadb-client-7-default" Oct 08 14:42:49 crc kubenswrapper[5065]: I1008 14:42:49.901818 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 08 14:42:49 crc kubenswrapper[5065]: I1008 14:42:49.914443 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Oct 08 14:42:49 crc kubenswrapper[5065]: I1008 14:42:49.950100 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq4xm\" (UniqueName: \"kubernetes.io/projected/f814194e-462b-4b9f-9d6c-cc9c164b1ef4-kube-api-access-gq4xm\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:50 crc kubenswrapper[5065]: I1008 14:42:50.051509 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g55cz\" (UniqueName: \"kubernetes.io/projected/de51983e-a8ab-4f52-aeec-c10d320b384e-kube-api-access-g55cz\") pod \"mariadb-client-2\" (UID: \"de51983e-a8ab-4f52-aeec-c10d320b384e\") " pod="openstack/mariadb-client-2" Oct 08 14:42:50 crc kubenswrapper[5065]: I1008 14:42:50.153498 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g55cz\" (UniqueName: \"kubernetes.io/projected/de51983e-a8ab-4f52-aeec-c10d320b384e-kube-api-access-g55cz\") pod \"mariadb-client-2\" (UID: \"de51983e-a8ab-4f52-aeec-c10d320b384e\") " pod="openstack/mariadb-client-2" Oct 08 14:42:50 crc kubenswrapper[5065]: I1008 14:42:50.175381 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g55cz\" (UniqueName: \"kubernetes.io/projected/de51983e-a8ab-4f52-aeec-c10d320b384e-kube-api-access-g55cz\") pod \"mariadb-client-2\" (UID: \"de51983e-a8ab-4f52-aeec-c10d320b384e\") " pod="openstack/mariadb-client-2" Oct 08 14:42:50 crc kubenswrapper[5065]: I1008 14:42:50.229049 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 08 14:42:50 crc kubenswrapper[5065]: I1008 14:42:50.367476 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcd0e76d473d038a1ba625da4cefce99b31b5a4437eeb4a288f49947cb057f41" Oct 08 14:42:50 crc kubenswrapper[5065]: I1008 14:42:50.367643 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 08 14:42:50 crc kubenswrapper[5065]: I1008 14:42:50.562533 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Oct 08 14:42:50 crc kubenswrapper[5065]: W1008 14:42:50.567113 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde51983e_a8ab_4f52_aeec_c10d320b384e.slice/crio-bfd69e2d11140b03a2498132189cae2bfaf587999493230220de96e5cf35522b WatchSource:0}: Error finding container bfd69e2d11140b03a2498132189cae2bfaf587999493230220de96e5cf35522b: Status 404 returned error can't find the container with id bfd69e2d11140b03a2498132189cae2bfaf587999493230220de96e5cf35522b Oct 08 14:42:50 crc kubenswrapper[5065]: I1008 14:42:50.886569 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f814194e-462b-4b9f-9d6c-cc9c164b1ef4" path="/var/lib/kubelet/pods/f814194e-462b-4b9f-9d6c-cc9c164b1ef4/volumes" Oct 08 14:42:51 crc kubenswrapper[5065]: I1008 14:42:51.381039 5065 generic.go:334] "Generic (PLEG): container finished" podID="de51983e-a8ab-4f52-aeec-c10d320b384e" containerID="829a48c0447ffcd1089e224970101d443341247e7749360e9438cd83eb4c01a9" exitCode=0 Oct 08 14:42:51 crc kubenswrapper[5065]: I1008 14:42:51.381106 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"de51983e-a8ab-4f52-aeec-c10d320b384e","Type":"ContainerDied","Data":"829a48c0447ffcd1089e224970101d443341247e7749360e9438cd83eb4c01a9"} Oct 08 14:42:51 crc kubenswrapper[5065]: I1008 14:42:51.381181 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"de51983e-a8ab-4f52-aeec-c10d320b384e","Type":"ContainerStarted","Data":"bfd69e2d11140b03a2498132189cae2bfaf587999493230220de96e5cf35522b"} Oct 08 14:42:52 crc kubenswrapper[5065]: I1008 14:42:52.890530 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 08 14:42:52 crc kubenswrapper[5065]: I1008 14:42:52.910187 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_de51983e-a8ab-4f52-aeec-c10d320b384e/mariadb-client-2/0.log" Oct 08 14:42:52 crc kubenswrapper[5065]: I1008 14:42:52.939344 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Oct 08 14:42:52 crc kubenswrapper[5065]: I1008 14:42:52.946377 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Oct 08 14:42:53 crc kubenswrapper[5065]: I1008 14:42:53.003504 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g55cz\" (UniqueName: \"kubernetes.io/projected/de51983e-a8ab-4f52-aeec-c10d320b384e-kube-api-access-g55cz\") pod \"de51983e-a8ab-4f52-aeec-c10d320b384e\" (UID: \"de51983e-a8ab-4f52-aeec-c10d320b384e\") " Oct 08 14:42:53 crc kubenswrapper[5065]: I1008 14:42:53.023723 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de51983e-a8ab-4f52-aeec-c10d320b384e-kube-api-access-g55cz" (OuterVolumeSpecName: "kube-api-access-g55cz") pod "de51983e-a8ab-4f52-aeec-c10d320b384e" (UID: "de51983e-a8ab-4f52-aeec-c10d320b384e"). InnerVolumeSpecName "kube-api-access-g55cz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:53 crc kubenswrapper[5065]: I1008 14:42:53.106782 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g55cz\" (UniqueName: \"kubernetes.io/projected/de51983e-a8ab-4f52-aeec-c10d320b384e-kube-api-access-g55cz\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:53 crc kubenswrapper[5065]: I1008 14:42:53.403047 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfd69e2d11140b03a2498132189cae2bfaf587999493230220de96e5cf35522b" Oct 08 14:42:53 crc kubenswrapper[5065]: I1008 14:42:53.403133 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 08 14:42:54 crc kubenswrapper[5065]: I1008 14:42:54.892826 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de51983e-a8ab-4f52-aeec-c10d320b384e" path="/var/lib/kubelet/pods/de51983e-a8ab-4f52-aeec-c10d320b384e/volumes" Oct 08 14:43:03 crc kubenswrapper[5065]: I1008 14:43:03.873354 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:43:03 crc kubenswrapper[5065]: E1008 14:43:03.874230 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.541196 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7sbzh"] Oct 08 14:43:12 crc kubenswrapper[5065]: E1008 14:43:12.542302 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de51983e-a8ab-4f52-aeec-c10d320b384e" containerName="mariadb-client-2" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.542323 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="de51983e-a8ab-4f52-aeec-c10d320b384e" containerName="mariadb-client-2" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.542625 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="de51983e-a8ab-4f52-aeec-c10d320b384e" containerName="mariadb-client-2" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.544474 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.564819 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sbzh"] Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.576634 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae7284b1-ca0c-4148-a915-85761e92d40a-utilities\") pod \"community-operators-7sbzh\" (UID: \"ae7284b1-ca0c-4148-a915-85761e92d40a\") " pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.576795 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae7284b1-ca0c-4148-a915-85761e92d40a-catalog-content\") pod \"community-operators-7sbzh\" (UID: \"ae7284b1-ca0c-4148-a915-85761e92d40a\") " pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.576931 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwjqv\" (UniqueName: \"kubernetes.io/projected/ae7284b1-ca0c-4148-a915-85761e92d40a-kube-api-access-mwjqv\") pod \"community-operators-7sbzh\" (UID: \"ae7284b1-ca0c-4148-a915-85761e92d40a\") " pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.678118 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae7284b1-ca0c-4148-a915-85761e92d40a-utilities\") pod \"community-operators-7sbzh\" (UID: \"ae7284b1-ca0c-4148-a915-85761e92d40a\") " pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.678442 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae7284b1-ca0c-4148-a915-85761e92d40a-catalog-content\") pod \"community-operators-7sbzh\" (UID: \"ae7284b1-ca0c-4148-a915-85761e92d40a\") " pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.678596 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwjqv\" (UniqueName: \"kubernetes.io/projected/ae7284b1-ca0c-4148-a915-85761e92d40a-kube-api-access-mwjqv\") pod \"community-operators-7sbzh\" (UID: \"ae7284b1-ca0c-4148-a915-85761e92d40a\") " pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.678610 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae7284b1-ca0c-4148-a915-85761e92d40a-utilities\") pod \"community-operators-7sbzh\" (UID: \"ae7284b1-ca0c-4148-a915-85761e92d40a\") " pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.678875 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae7284b1-ca0c-4148-a915-85761e92d40a-catalog-content\") pod \"community-operators-7sbzh\" (UID: \"ae7284b1-ca0c-4148-a915-85761e92d40a\") " pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.700243 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mwjqv\" (UniqueName: \"kubernetes.io/projected/ae7284b1-ca0c-4148-a915-85761e92d40a-kube-api-access-mwjqv\") pod \"community-operators-7sbzh\" (UID: \"ae7284b1-ca0c-4148-a915-85761e92d40a\") " pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:12 crc kubenswrapper[5065]: I1008 14:43:12.890910 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:13 crc kubenswrapper[5065]: I1008 14:43:13.368237 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sbzh"] Oct 08 14:43:13 crc kubenswrapper[5065]: I1008 14:43:13.606475 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sbzh" event={"ID":"ae7284b1-ca0c-4148-a915-85761e92d40a","Type":"ContainerStarted","Data":"7f543e2f567df7a28757c3e322d1f9ba009c7fed977ece14a657d3ac17d27784"} Oct 08 14:43:13 crc kubenswrapper[5065]: I1008 14:43:13.607004 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sbzh" event={"ID":"ae7284b1-ca0c-4148-a915-85761e92d40a","Type":"ContainerStarted","Data":"3863e922ebbf445e84c2a0fe6c9d5a08a1aee1b84b180816e5183d54aa850da3"} Oct 08 14:43:14 crc kubenswrapper[5065]: I1008 14:43:14.620248 5065 generic.go:334] "Generic (PLEG): container finished" podID="ae7284b1-ca0c-4148-a915-85761e92d40a" containerID="7f543e2f567df7a28757c3e322d1f9ba009c7fed977ece14a657d3ac17d27784" exitCode=0 Oct 08 14:43:14 crc kubenswrapper[5065]: I1008 14:43:14.620315 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sbzh" event={"ID":"ae7284b1-ca0c-4148-a915-85761e92d40a","Type":"ContainerDied","Data":"7f543e2f567df7a28757c3e322d1f9ba009c7fed977ece14a657d3ac17d27784"} Oct 08 14:43:14 crc kubenswrapper[5065]: I1008 14:43:14.874277 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:43:14 crc kubenswrapper[5065]: E1008 14:43:14.875081 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:43:16 crc kubenswrapper[5065]: I1008 14:43:16.643065 5065 generic.go:334] "Generic (PLEG): container finished" podID="ae7284b1-ca0c-4148-a915-85761e92d40a" containerID="bf7ff65eea0afeb2bf5fb275868f59f9d34b2afe80370b552c2bf256113333fd" exitCode=0 Oct 08 14:43:16 crc kubenswrapper[5065]: I1008 14:43:16.643182 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sbzh" event={"ID":"ae7284b1-ca0c-4148-a915-85761e92d40a","Type":"ContainerDied","Data":"bf7ff65eea0afeb2bf5fb275868f59f9d34b2afe80370b552c2bf256113333fd"} Oct 08 14:43:17 crc kubenswrapper[5065]: I1008 14:43:17.657618 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sbzh" event={"ID":"ae7284b1-ca0c-4148-a915-85761e92d40a","Type":"ContainerStarted","Data":"02f052606f27b66727d1bb516392f0cf44b7ba059a4919e428d3cbfd28c696a7"} Oct 08 14:43:17 crc kubenswrapper[5065]: I1008 14:43:17.692126 5065 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7sbzh" podStartSLOduration=3.19598747 podStartE2EDuration="5.692104617s" podCreationTimestamp="2025-10-08 14:43:12 +0000 UTC" firstStartedPulling="2025-10-08 14:43:14.623700577 +0000 UTC m=+5096.401082374" lastFinishedPulling="2025-10-08 14:43:17.119817724 +0000 UTC m=+5098.897199521" observedRunningTime="2025-10-08 14:43:17.6852408 +0000 UTC m=+5099.462622567" watchObservedRunningTime="2025-10-08 14:43:17.692104617 +0000 UTC m=+5099.469486384" Oct 08 14:43:22 crc kubenswrapper[5065]: I1008 14:43:22.891098 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:22 crc kubenswrapper[5065]: I1008 14:43:22.891811 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:22 crc kubenswrapper[5065]: I1008 14:43:22.958627 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:23 crc kubenswrapper[5065]: I1008 14:43:23.795595 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:23 crc kubenswrapper[5065]: I1008 14:43:23.866741 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sbzh"] Oct 08 14:43:25 crc kubenswrapper[5065]: I1008 14:43:25.738382 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7sbzh" podUID="ae7284b1-ca0c-4148-a915-85761e92d40a" containerName="registry-server" containerID="cri-o://02f052606f27b66727d1bb516392f0cf44b7ba059a4919e428d3cbfd28c696a7" gracePeriod=2 Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.156832 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.250484 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae7284b1-ca0c-4148-a915-85761e92d40a-catalog-content\") pod \"ae7284b1-ca0c-4148-a915-85761e92d40a\" (UID: \"ae7284b1-ca0c-4148-a915-85761e92d40a\") " Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.250604 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwjqv\" (UniqueName: \"kubernetes.io/projected/ae7284b1-ca0c-4148-a915-85761e92d40a-kube-api-access-mwjqv\") pod \"ae7284b1-ca0c-4148-a915-85761e92d40a\" (UID: \"ae7284b1-ca0c-4148-a915-85761e92d40a\") " Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.250676 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae7284b1-ca0c-4148-a915-85761e92d40a-utilities\") pod \"ae7284b1-ca0c-4148-a915-85761e92d40a\" (UID: \"ae7284b1-ca0c-4148-a915-85761e92d40a\") " Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.252402 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae7284b1-ca0c-4148-a915-85761e92d40a-utilities" (OuterVolumeSpecName: "utilities") pod "ae7284b1-ca0c-4148-a915-85761e92d40a" (UID: "ae7284b1-ca0c-4148-a915-85761e92d40a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.256922 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae7284b1-ca0c-4148-a915-85761e92d40a-kube-api-access-mwjqv" (OuterVolumeSpecName: "kube-api-access-mwjqv") pod "ae7284b1-ca0c-4148-a915-85761e92d40a" (UID: "ae7284b1-ca0c-4148-a915-85761e92d40a"). InnerVolumeSpecName "kube-api-access-mwjqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.319947 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae7284b1-ca0c-4148-a915-85761e92d40a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae7284b1-ca0c-4148-a915-85761e92d40a" (UID: "ae7284b1-ca0c-4148-a915-85761e92d40a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.353056 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae7284b1-ca0c-4148-a915-85761e92d40a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.353107 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwjqv\" (UniqueName: \"kubernetes.io/projected/ae7284b1-ca0c-4148-a915-85761e92d40a-kube-api-access-mwjqv\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.353125 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae7284b1-ca0c-4148-a915-85761e92d40a-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.749146 5065 generic.go:334] "Generic (PLEG): container finished" podID="ae7284b1-ca0c-4148-a915-85761e92d40a" containerID="02f052606f27b66727d1bb516392f0cf44b7ba059a4919e428d3cbfd28c696a7" exitCode=0 Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.749228 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7sbzh" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.749280 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sbzh" event={"ID":"ae7284b1-ca0c-4148-a915-85761e92d40a","Type":"ContainerDied","Data":"02f052606f27b66727d1bb516392f0cf44b7ba059a4919e428d3cbfd28c696a7"} Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.749341 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sbzh" event={"ID":"ae7284b1-ca0c-4148-a915-85761e92d40a","Type":"ContainerDied","Data":"3863e922ebbf445e84c2a0fe6c9d5a08a1aee1b84b180816e5183d54aa850da3"} Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.749371 5065 scope.go:117] "RemoveContainer" containerID="02f052606f27b66727d1bb516392f0cf44b7ba059a4919e428d3cbfd28c696a7" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.772514 5065 scope.go:117] "RemoveContainer" containerID="bf7ff65eea0afeb2bf5fb275868f59f9d34b2afe80370b552c2bf256113333fd" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.791627 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sbzh"] Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.800466 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7sbzh"] Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.802033 5065 scope.go:117] "RemoveContainer" containerID="7f543e2f567df7a28757c3e322d1f9ba009c7fed977ece14a657d3ac17d27784" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.843196 5065 scope.go:117] "RemoveContainer" containerID="02f052606f27b66727d1bb516392f0cf44b7ba059a4919e428d3cbfd28c696a7" Oct 08 14:43:26 crc kubenswrapper[5065]: E1008 14:43:26.843639 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02f052606f27b66727d1bb516392f0cf44b7ba059a4919e428d3cbfd28c696a7\": container with ID starting with 02f052606f27b66727d1bb516392f0cf44b7ba059a4919e428d3cbfd28c696a7 not found: ID does not exist" containerID="02f052606f27b66727d1bb516392f0cf44b7ba059a4919e428d3cbfd28c696a7" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.843668 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02f052606f27b66727d1bb516392f0cf44b7ba059a4919e428d3cbfd28c696a7"} err="failed to get container status \"02f052606f27b66727d1bb516392f0cf44b7ba059a4919e428d3cbfd28c696a7\": rpc error: code = NotFound desc = could not find container \"02f052606f27b66727d1bb516392f0cf44b7ba059a4919e428d3cbfd28c696a7\": container with ID starting with 02f052606f27b66727d1bb516392f0cf44b7ba059a4919e428d3cbfd28c696a7 not found: ID does not exist" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.843687 5065 scope.go:117] "RemoveContainer" containerID="bf7ff65eea0afeb2bf5fb275868f59f9d34b2afe80370b552c2bf256113333fd" Oct 08 14:43:26 crc kubenswrapper[5065]: E1008 14:43:26.843949 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf7ff65eea0afeb2bf5fb275868f59f9d34b2afe80370b552c2bf256113333fd\": container with ID starting with bf7ff65eea0afeb2bf5fb275868f59f9d34b2afe80370b552c2bf256113333fd not found: ID does not exist" containerID="bf7ff65eea0afeb2bf5fb275868f59f9d34b2afe80370b552c2bf256113333fd" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.843974 5065 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7ff65eea0afeb2bf5fb275868f59f9d34b2afe80370b552c2bf256113333fd"} err="failed to get container status \"bf7ff65eea0afeb2bf5fb275868f59f9d34b2afe80370b552c2bf256113333fd\": rpc error: code = NotFound desc = could not find container \"bf7ff65eea0afeb2bf5fb275868f59f9d34b2afe80370b552c2bf256113333fd\": container with ID starting with bf7ff65eea0afeb2bf5fb275868f59f9d34b2afe80370b552c2bf256113333fd not found: ID does not exist" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.843988 5065 scope.go:117] "RemoveContainer" containerID="7f543e2f567df7a28757c3e322d1f9ba009c7fed977ece14a657d3ac17d27784" Oct 08 14:43:26 crc kubenswrapper[5065]: E1008 14:43:26.844195 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f543e2f567df7a28757c3e322d1f9ba009c7fed977ece14a657d3ac17d27784\": container with ID starting with 7f543e2f567df7a28757c3e322d1f9ba009c7fed977ece14a657d3ac17d27784 not found: ID does not exist" containerID="7f543e2f567df7a28757c3e322d1f9ba009c7fed977ece14a657d3ac17d27784" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.844219 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f543e2f567df7a28757c3e322d1f9ba009c7fed977ece14a657d3ac17d27784"} err="failed to get container status \"7f543e2f567df7a28757c3e322d1f9ba009c7fed977ece14a657d3ac17d27784\": rpc error: code = NotFound desc = could not find container \"7f543e2f567df7a28757c3e322d1f9ba009c7fed977ece14a657d3ac17d27784\": container with ID starting with 7f543e2f567df7a28757c3e322d1f9ba009c7fed977ece14a657d3ac17d27784 not found: ID does not exist" Oct 08 14:43:26 crc kubenswrapper[5065]: I1008 14:43:26.882233 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae7284b1-ca0c-4148-a915-85761e92d40a" path="/var/lib/kubelet/pods/ae7284b1-ca0c-4148-a915-85761e92d40a/volumes" Oct 08 14:43:27 crc kubenswrapper[5065]: I1008 14:43:27.873640 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:43:27 crc kubenswrapper[5065]: E1008 14:43:27.873986 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:43:41 crc kubenswrapper[5065]: I1008 14:43:41.873329 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:43:41 crc kubenswrapper[5065]: E1008 14:43:41.874174 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:43:54 crc kubenswrapper[5065]: I1008 14:43:54.874841 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:43:54 crc 
kubenswrapper[5065]: E1008 14:43:54.876041 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:44:08 crc kubenswrapper[5065]: I1008 14:44:08.887586 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:44:08 crc kubenswrapper[5065]: E1008 14:44:08.889211 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:44:21 crc kubenswrapper[5065]: I1008 14:44:21.874002 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:44:21 crc kubenswrapper[5065]: E1008 14:44:21.875308 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:44:29 crc kubenswrapper[5065]: I1008 14:44:29.804826 5065 scope.go:117] "RemoveContainer" containerID="ace567a042dc9f3d8ce74137b4ca2be1a3d5b1f0345433a115ff2e898da07872" Oct 08 14:44:33 crc kubenswrapper[5065]: I1008 14:44:33.874022 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:44:33 crc kubenswrapper[5065]: E1008 14:44:33.875701 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:44:47 crc kubenswrapper[5065]: I1008 14:44:47.873354 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:44:47 crc kubenswrapper[5065]: E1008 14:44:47.874392 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:44:59 crc kubenswrapper[5065]: I1008 14:44:59.874107 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:44:59 crc 
kubenswrapper[5065]: E1008 14:44:59.875220 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.149774 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc"] Oct 08 14:45:00 crc kubenswrapper[5065]: E1008 14:45:00.151394 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae7284b1-ca0c-4148-a915-85761e92d40a" containerName="registry-server" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.151444 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae7284b1-ca0c-4148-a915-85761e92d40a" containerName="registry-server" Oct 08 14:45:00 crc kubenswrapper[5065]: E1008 14:45:00.151473 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae7284b1-ca0c-4148-a915-85761e92d40a" containerName="extract-content" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.151485 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae7284b1-ca0c-4148-a915-85761e92d40a" containerName="extract-content" Oct 08 14:45:00 crc kubenswrapper[5065]: E1008 14:45:00.151533 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae7284b1-ca0c-4148-a915-85761e92d40a" containerName="extract-utilities" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.151546 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae7284b1-ca0c-4148-a915-85761e92d40a" containerName="extract-utilities" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.151818 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae7284b1-ca0c-4148-a915-85761e92d40a" containerName="registry-server" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.152610 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.154903 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.154950 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.156340 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc"] Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.288219 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f08da538-ea61-4994-b115-5e3276f87fc1-secret-volume\") pod \"collect-profiles-29332245-89rbc\" (UID: \"f08da538-ea61-4994-b115-5e3276f87fc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.288297 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f08da538-ea61-4994-b115-5e3276f87fc1-config-volume\") pod \"collect-profiles-29332245-89rbc\" (UID: \"f08da538-ea61-4994-b115-5e3276f87fc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.288315 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbl5\" (UniqueName: \"kubernetes.io/projected/f08da538-ea61-4994-b115-5e3276f87fc1-kube-api-access-9bbl5\") pod \"collect-profiles-29332245-89rbc\" (UID: \"f08da538-ea61-4994-b115-5e3276f87fc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.389450 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f08da538-ea61-4994-b115-5e3276f87fc1-secret-volume\") pod \"collect-profiles-29332245-89rbc\" (UID: \"f08da538-ea61-4994-b115-5e3276f87fc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.389569 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f08da538-ea61-4994-b115-5e3276f87fc1-config-volume\") pod \"collect-profiles-29332245-89rbc\" (UID: \"f08da538-ea61-4994-b115-5e3276f87fc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.389592 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bbl5\" (UniqueName: \"kubernetes.io/projected/f08da538-ea61-4994-b115-5e3276f87fc1-kube-api-access-9bbl5\") pod \"collect-profiles-29332245-89rbc\" (UID: \"f08da538-ea61-4994-b115-5e3276f87fc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.390879 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f08da538-ea61-4994-b115-5e3276f87fc1-config-volume\") pod 
\"collect-profiles-29332245-89rbc\" (UID: \"f08da538-ea61-4994-b115-5e3276f87fc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.396655 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f08da538-ea61-4994-b115-5e3276f87fc1-secret-volume\") pod \"collect-profiles-29332245-89rbc\" (UID: \"f08da538-ea61-4994-b115-5e3276f87fc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.406192 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bbl5\" (UniqueName: \"kubernetes.io/projected/f08da538-ea61-4994-b115-5e3276f87fc1-kube-api-access-9bbl5\") pod \"collect-profiles-29332245-89rbc\" (UID: \"f08da538-ea61-4994-b115-5e3276f87fc1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.474258 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" Oct 08 14:45:00 crc kubenswrapper[5065]: I1008 14:45:00.902718 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc"] Oct 08 14:45:01 crc kubenswrapper[5065]: I1008 14:45:01.621218 5065 generic.go:334] "Generic (PLEG): container finished" podID="f08da538-ea61-4994-b115-5e3276f87fc1" containerID="168fd02b1178403084d69d4f4019d48a1c3f20bdf3b3053549576dd47ceef1dc" exitCode=0 Oct 08 14:45:01 crc kubenswrapper[5065]: I1008 14:45:01.621475 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" event={"ID":"f08da538-ea61-4994-b115-5e3276f87fc1","Type":"ContainerDied","Data":"168fd02b1178403084d69d4f4019d48a1c3f20bdf3b3053549576dd47ceef1dc"} Oct 08 14:45:01 crc kubenswrapper[5065]: I1008 14:45:01.621891 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" event={"ID":"f08da538-ea61-4994-b115-5e3276f87fc1","Type":"ContainerStarted","Data":"32703e64eef03b4ee264c25c6f7b9a84433f03bb9d65069d3834f14f1ab0fd44"} Oct 08 14:45:02 crc kubenswrapper[5065]: I1008 14:45:02.943424 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" Oct 08 14:45:03 crc kubenswrapper[5065]: I1008 14:45:03.030624 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f08da538-ea61-4994-b115-5e3276f87fc1-config-volume\") pod \"f08da538-ea61-4994-b115-5e3276f87fc1\" (UID: \"f08da538-ea61-4994-b115-5e3276f87fc1\") " Oct 08 14:45:03 crc kubenswrapper[5065]: I1008 14:45:03.030685 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bbl5\" (UniqueName: \"kubernetes.io/projected/f08da538-ea61-4994-b115-5e3276f87fc1-kube-api-access-9bbl5\") pod \"f08da538-ea61-4994-b115-5e3276f87fc1\" (UID: \"f08da538-ea61-4994-b115-5e3276f87fc1\") " Oct 08 14:45:03 crc kubenswrapper[5065]: I1008 14:45:03.030857 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f08da538-ea61-4994-b115-5e3276f87fc1-secret-volume\") pod \"f08da538-ea61-4994-b115-5e3276f87fc1\" (UID: \"f08da538-ea61-4994-b115-5e3276f87fc1\") " Oct 08 14:45:03 crc kubenswrapper[5065]: I1008 14:45:03.031268 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f08da538-ea61-4994-b115-5e3276f87fc1-config-volume" (OuterVolumeSpecName: "config-volume") pod "f08da538-ea61-4994-b115-5e3276f87fc1" (UID: "f08da538-ea61-4994-b115-5e3276f87fc1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:45:03 crc kubenswrapper[5065]: I1008 14:45:03.035557 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f08da538-ea61-4994-b115-5e3276f87fc1-kube-api-access-9bbl5" (OuterVolumeSpecName: "kube-api-access-9bbl5") pod "f08da538-ea61-4994-b115-5e3276f87fc1" (UID: "f08da538-ea61-4994-b115-5e3276f87fc1"). InnerVolumeSpecName "kube-api-access-9bbl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:45:03 crc kubenswrapper[5065]: I1008 14:45:03.035878 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f08da538-ea61-4994-b115-5e3276f87fc1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f08da538-ea61-4994-b115-5e3276f87fc1" (UID: "f08da538-ea61-4994-b115-5e3276f87fc1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:03 crc kubenswrapper[5065]: I1008 14:45:03.132479 5065 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f08da538-ea61-4994-b115-5e3276f87fc1-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:03 crc kubenswrapper[5065]: I1008 14:45:03.132513 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bbl5\" (UniqueName: \"kubernetes.io/projected/f08da538-ea61-4994-b115-5e3276f87fc1-kube-api-access-9bbl5\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:03 crc kubenswrapper[5065]: I1008 14:45:03.132525 5065 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f08da538-ea61-4994-b115-5e3276f87fc1-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:03 crc kubenswrapper[5065]: I1008 14:45:03.642830 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" event={"ID":"f08da538-ea61-4994-b115-5e3276f87fc1","Type":"ContainerDied","Data":"32703e64eef03b4ee264c25c6f7b9a84433f03bb9d65069d3834f14f1ab0fd44"} Oct 08 14:45:03 crc kubenswrapper[5065]: I1008 14:45:03.642867 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32703e64eef03b4ee264c25c6f7b9a84433f03bb9d65069d3834f14f1ab0fd44" Oct 08 14:45:03 crc kubenswrapper[5065]: I1008 14:45:03.642889 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-89rbc" Oct 08 14:45:04 crc kubenswrapper[5065]: I1008 14:45:04.018913 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"] Oct 08 14:45:04 crc kubenswrapper[5065]: I1008 14:45:04.029476 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332200-4qqcm"] Oct 08 14:45:04 crc kubenswrapper[5065]: I1008 14:45:04.887205 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c" path="/var/lib/kubelet/pods/b2c2c914-8d87-4d0f-9bb0-c7c46ba4d31c/volumes" Oct 08 14:45:10 crc kubenswrapper[5065]: I1008 14:45:10.873386 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:45:10 crc kubenswrapper[5065]: E1008 14:45:10.874091 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:45:23 crc kubenswrapper[5065]: I1008 14:45:23.873366 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:45:23 crc kubenswrapper[5065]: E1008 14:45:23.875474 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:45:29 crc kubenswrapper[5065]: I1008 14:45:29.883735 5065 scope.go:117] "RemoveContainer" containerID="6660f71acee9e3e8025ca3a789ebd33b14a8cc76a9c0d07cfabcaa12089a15dc" Oct 08 14:45:38 crc kubenswrapper[5065]: I1008 14:45:38.879911 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:45:38 crc kubenswrapper[5065]: E1008 14:45:38.881135 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:45:53 crc kubenswrapper[5065]: I1008 14:45:53.873946 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:45:53 crc kubenswrapper[5065]: E1008 14:45:53.874959 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" Oct 08 14:46:08 crc kubenswrapper[5065]: I1008 14:46:08.877062 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:46:09 crc kubenswrapper[5065]: I1008 14:46:09.289501 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"6a2d5920533fb1103902f95c6619956e7541ada7a9d91b36d2c215022ce77856"} Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.122470 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Oct 08 14:46:17 crc kubenswrapper[5065]: E1008 14:46:17.123969 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08da538-ea61-4994-b115-5e3276f87fc1" containerName="collect-profiles" Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.124007 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08da538-ea61-4994-b115-5e3276f87fc1" containerName="collect-profiles" Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.124320 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="f08da538-ea61-4994-b115-5e3276f87fc1" containerName="collect-profiles" Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.125317 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.128112 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wwbk7" Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.155166 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.228367 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt256\" (UniqueName: \"kubernetes.io/projected/4ec8b13a-eddf-4335-9a1d-417b6ac0c923-kube-api-access-rt256\") pod \"mariadb-copy-data\" (UID: \"4ec8b13a-eddf-4335-9a1d-417b6ac0c923\") " pod="openstack/mariadb-copy-data" Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.228615 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e73aa46f-ab2f-45ea-9cd8-889529a3ec9b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e73aa46f-ab2f-45ea-9cd8-889529a3ec9b\") pod \"mariadb-copy-data\" (UID: \"4ec8b13a-eddf-4335-9a1d-417b6ac0c923\") " pod="openstack/mariadb-copy-data" Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.330528 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt256\" (UniqueName: \"kubernetes.io/projected/4ec8b13a-eddf-4335-9a1d-417b6ac0c923-kube-api-access-rt256\") pod \"mariadb-copy-data\" (UID: \"4ec8b13a-eddf-4335-9a1d-417b6ac0c923\") " pod="openstack/mariadb-copy-data" Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.330680 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e73aa46f-ab2f-45ea-9cd8-889529a3ec9b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e73aa46f-ab2f-45ea-9cd8-889529a3ec9b\") pod \"mariadb-copy-data\" (UID: \"4ec8b13a-eddf-4335-9a1d-417b6ac0c923\") " pod="openstack/mariadb-copy-data" Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.334533 5065 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.334572 5065 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e73aa46f-ab2f-45ea-9cd8-889529a3ec9b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e73aa46f-ab2f-45ea-9cd8-889529a3ec9b\") pod \"mariadb-copy-data\" (UID: \"4ec8b13a-eddf-4335-9a1d-417b6ac0c923\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/940b0fe3ab920966ba0946cbdeb28d838c884d441255da03da797575113d561a/globalmount\"" pod="openstack/mariadb-copy-data" Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.370398 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt256\" (UniqueName: \"kubernetes.io/projected/4ec8b13a-eddf-4335-9a1d-417b6ac0c923-kube-api-access-rt256\") pod \"mariadb-copy-data\" (UID: \"4ec8b13a-eddf-4335-9a1d-417b6ac0c923\") " pod="openstack/mariadb-copy-data" Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.385571 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e73aa46f-ab2f-45ea-9cd8-889529a3ec9b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e73aa46f-ab2f-45ea-9cd8-889529a3ec9b\") pod \"mariadb-copy-data\" (UID: \"4ec8b13a-eddf-4335-9a1d-417b6ac0c923\") " pod="openstack/mariadb-copy-data" Oct 08 14:46:17 crc kubenswrapper[5065]: I1008 14:46:17.448633 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Oct 08 14:46:18 crc kubenswrapper[5065]: I1008 14:46:18.137351 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Oct 08 14:46:18 crc kubenswrapper[5065]: I1008 14:46:18.393662 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"4ec8b13a-eddf-4335-9a1d-417b6ac0c923","Type":"ContainerStarted","Data":"fe7d6298c1f51ced2465fe9c3c2d048f31e3b065a0ac7140edf37002c5e47cf4"} Oct 08 14:46:18 crc kubenswrapper[5065]: I1008 14:46:18.393930 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"4ec8b13a-eddf-4335-9a1d-417b6ac0c923","Type":"ContainerStarted","Data":"db644856b3f54ac6de932dae61597c5c823b1f13aa0d59dbf24b632a2f0061ec"} Oct 08 14:46:18 crc kubenswrapper[5065]: I1008 14:46:18.417931 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.417911888 podStartE2EDuration="2.417911888s" podCreationTimestamp="2025-10-08 14:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:46:18.411170714 +0000 UTC m=+5280.188552511" watchObservedRunningTime="2025-10-08 14:46:18.417911888 +0000 UTC m=+5280.195293655" Oct 08 14:46:20 crc kubenswrapper[5065]: I1008 14:46:20.170933 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Oct 08 14:46:20 crc kubenswrapper[5065]: I1008 14:46:20.173955 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 08 14:46:20 crc kubenswrapper[5065]: I1008 14:46:20.197534 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 08 14:46:20 crc kubenswrapper[5065]: I1008 14:46:20.292517 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnd8n\" (UniqueName: \"kubernetes.io/projected/6e8d539e-71d7-4753-b30e-28f7d1afba55-kube-api-access-wnd8n\") pod \"mariadb-client\" (UID: \"6e8d539e-71d7-4753-b30e-28f7d1afba55\") " pod="openstack/mariadb-client" Oct 08 14:46:20 crc kubenswrapper[5065]: I1008 14:46:20.398476 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnd8n\" (UniqueName: \"kubernetes.io/projected/6e8d539e-71d7-4753-b30e-28f7d1afba55-kube-api-access-wnd8n\") pod \"mariadb-client\" (UID: \"6e8d539e-71d7-4753-b30e-28f7d1afba55\") " pod="openstack/mariadb-client" Oct 08 14:46:20 crc kubenswrapper[5065]: I1008 14:46:20.427111 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnd8n\" (UniqueName: \"kubernetes.io/projected/6e8d539e-71d7-4753-b30e-28f7d1afba55-kube-api-access-wnd8n\") pod \"mariadb-client\" (UID: \"6e8d539e-71d7-4753-b30e-28f7d1afba55\") " pod="openstack/mariadb-client" Oct 08 14:46:20 crc kubenswrapper[5065]: I1008 14:46:20.515801 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 08 14:46:20 crc kubenswrapper[5065]: I1008 14:46:20.968615 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 08 14:46:20 crc kubenswrapper[5065]: W1008 14:46:20.974434 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e8d539e_71d7_4753_b30e_28f7d1afba55.slice/crio-07d14cd1972bd7d2c87aa3c82458a3f341e43920a0bdc5c52fe060982222c950 WatchSource:0}: Error finding container 07d14cd1972bd7d2c87aa3c82458a3f341e43920a0bdc5c52fe060982222c950: Status 404 returned error can't find the container with id 07d14cd1972bd7d2c87aa3c82458a3f341e43920a0bdc5c52fe060982222c950 Oct 08 14:46:21 crc kubenswrapper[5065]: I1008 14:46:21.427337 5065 generic.go:334] "Generic (PLEG): container finished" podID="6e8d539e-71d7-4753-b30e-28f7d1afba55" containerID="f67cebe2ee0e925b1883ac7e3f9363e64856c54f01a9d2ac67c3720768f058d1" exitCode=0 Oct 08 14:46:21 crc kubenswrapper[5065]: I1008 14:46:21.427433 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"6e8d539e-71d7-4753-b30e-28f7d1afba55","Type":"ContainerDied","Data":"f67cebe2ee0e925b1883ac7e3f9363e64856c54f01a9d2ac67c3720768f058d1"} Oct 08 14:46:21 crc kubenswrapper[5065]: I1008 14:46:21.427735 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"6e8d539e-71d7-4753-b30e-28f7d1afba55","Type":"ContainerStarted","Data":"07d14cd1972bd7d2c87aa3c82458a3f341e43920a0bdc5c52fe060982222c950"} Oct 08 14:46:22 crc kubenswrapper[5065]: I1008 14:46:22.917032 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 08 14:46:22 crc kubenswrapper[5065]: I1008 14:46:22.939682 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_6e8d539e-71d7-4753-b30e-28f7d1afba55/mariadb-client/0.log" Oct 08 14:46:22 crc kubenswrapper[5065]: I1008 14:46:22.966959 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Oct 08 14:46:22 crc kubenswrapper[5065]: I1008 14:46:22.974180 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.044731 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnd8n\" (UniqueName: \"kubernetes.io/projected/6e8d539e-71d7-4753-b30e-28f7d1afba55-kube-api-access-wnd8n\") pod \"6e8d539e-71d7-4753-b30e-28f7d1afba55\" (UID: \"6e8d539e-71d7-4753-b30e-28f7d1afba55\") " Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.053922 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e8d539e-71d7-4753-b30e-28f7d1afba55-kube-api-access-wnd8n" (OuterVolumeSpecName: "kube-api-access-wnd8n") pod "6e8d539e-71d7-4753-b30e-28f7d1afba55" (UID: "6e8d539e-71d7-4753-b30e-28f7d1afba55"). InnerVolumeSpecName "kube-api-access-wnd8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.105636 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Oct 08 14:46:23 crc kubenswrapper[5065]: E1008 14:46:23.106245 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8d539e-71d7-4753-b30e-28f7d1afba55" containerName="mariadb-client" Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.106286 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8d539e-71d7-4753-b30e-28f7d1afba55" containerName="mariadb-client" Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.106727 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8d539e-71d7-4753-b30e-28f7d1afba55" containerName="mariadb-client" Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.107900 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.132006 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.147695 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnd8n\" (UniqueName: \"kubernetes.io/projected/6e8d539e-71d7-4753-b30e-28f7d1afba55-kube-api-access-wnd8n\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.249209 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk2t4\" (UniqueName: \"kubernetes.io/projected/5cffb22f-1be6-4cca-88e0-b6963dc52a88-kube-api-access-bk2t4\") pod \"mariadb-client\" (UID: \"5cffb22f-1be6-4cca-88e0-b6963dc52a88\") " pod="openstack/mariadb-client" Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.351844 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk2t4\" (UniqueName: \"kubernetes.io/projected/5cffb22f-1be6-4cca-88e0-b6963dc52a88-kube-api-access-bk2t4\") pod \"mariadb-client\" (UID: \"5cffb22f-1be6-4cca-88e0-b6963dc52a88\") " pod="openstack/mariadb-client" Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.383354 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk2t4\" (UniqueName: \"kubernetes.io/projected/5cffb22f-1be6-4cca-88e0-b6963dc52a88-kube-api-access-bk2t4\") pod \"mariadb-client\" (UID: \"5cffb22f-1be6-4cca-88e0-b6963dc52a88\") " pod="openstack/mariadb-client" Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.434716 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.453567 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07d14cd1972bd7d2c87aa3c82458a3f341e43920a0bdc5c52fe060982222c950" Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.453631 5065 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.507338 5065 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="6e8d539e-71d7-4753-b30e-28f7d1afba55" podUID="5cffb22f-1be6-4cca-88e0-b6963dc52a88" Oct 08 14:46:23 crc kubenswrapper[5065]: I1008 14:46:23.707310 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 08 14:46:23 crc kubenswrapper[5065]: W1008 14:46:23.709520 5065 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cffb22f_1be6_4cca_88e0_b6963dc52a88.slice/crio-95679e53774391262a837625337e6758ccf0c20719706c612df9e074c65a298e WatchSource:0}: Error finding container 95679e53774391262a837625337e6758ccf0c20719706c612df9e074c65a298e: Status 404 returned error can't find the container with id 95679e53774391262a837625337e6758ccf0c20719706c612df9e074c65a298e Oct 08 14:46:24 crc kubenswrapper[5065]: I1008 14:46:24.467516 5065 generic.go:334] "Generic (PLEG): container finished" podID="5cffb22f-1be6-4cca-88e0-b6963dc52a88" containerID="2fb5cc662f35430723733128b4b88ce80dac8adfd2af77d2ae37d4a42c49734a" exitCode=0 Oct 08 14:46:24 crc kubenswrapper[5065]: I1008 14:46:24.467655 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"5cffb22f-1be6-4cca-88e0-b6963dc52a88","Type":"ContainerDied","Data":"2fb5cc662f35430723733128b4b88ce80dac8adfd2af77d2ae37d4a42c49734a"} Oct 08 14:46:24 crc kubenswrapper[5065]: I1008 14:46:24.468049 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"5cffb22f-1be6-4cca-88e0-b6963dc52a88","Type":"ContainerStarted","Data":"95679e53774391262a837625337e6758ccf0c20719706c612df9e074c65a298e"} Oct 08 14:46:24 crc kubenswrapper[5065]: I1008 14:46:24.895529 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e8d539e-71d7-4753-b30e-28f7d1afba55" path="/var/lib/kubelet/pods/6e8d539e-71d7-4753-b30e-28f7d1afba55/volumes" Oct 08 14:46:25 crc kubenswrapper[5065]: I1008 14:46:25.871857 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 08 14:46:25 crc kubenswrapper[5065]: I1008 14:46:25.889756 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_5cffb22f-1be6-4cca-88e0-b6963dc52a88/mariadb-client/0.log" Oct 08 14:46:25 crc kubenswrapper[5065]: I1008 14:46:25.915600 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Oct 08 14:46:25 crc kubenswrapper[5065]: I1008 14:46:25.920609 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Oct 08 14:46:26 crc kubenswrapper[5065]: I1008 14:46:26.001601 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk2t4\" (UniqueName: \"kubernetes.io/projected/5cffb22f-1be6-4cca-88e0-b6963dc52a88-kube-api-access-bk2t4\") pod \"5cffb22f-1be6-4cca-88e0-b6963dc52a88\" (UID: \"5cffb22f-1be6-4cca-88e0-b6963dc52a88\") " Oct 08 14:46:26 crc kubenswrapper[5065]: I1008 14:46:26.008920 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cffb22f-1be6-4cca-88e0-b6963dc52a88-kube-api-access-bk2t4" (OuterVolumeSpecName: "kube-api-access-bk2t4") pod "5cffb22f-1be6-4cca-88e0-b6963dc52a88" (UID: "5cffb22f-1be6-4cca-88e0-b6963dc52a88"). 
InnerVolumeSpecName "kube-api-access-bk2t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:46:26 crc kubenswrapper[5065]: I1008 14:46:26.104122 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk2t4\" (UniqueName: \"kubernetes.io/projected/5cffb22f-1be6-4cca-88e0-b6963dc52a88-kube-api-access-bk2t4\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:26 crc kubenswrapper[5065]: I1008 14:46:26.493212 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95679e53774391262a837625337e6758ccf0c20719706c612df9e074c65a298e" Oct 08 14:46:26 crc kubenswrapper[5065]: I1008 14:46:26.493310 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 08 14:46:26 crc kubenswrapper[5065]: I1008 14:46:26.891554 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cffb22f-1be6-4cca-88e0-b6963dc52a88" path="/var/lib/kubelet/pods/5cffb22f-1be6-4cca-88e0-b6963dc52a88/volumes" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.494521 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qjgvh"] Oct 08 14:47:23 crc kubenswrapper[5065]: E1008 14:47:23.495272 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cffb22f-1be6-4cca-88e0-b6963dc52a88" containerName="mariadb-client" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.495286 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cffb22f-1be6-4cca-88e0-b6963dc52a88" containerName="mariadb-client" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.495505 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cffb22f-1be6-4cca-88e0-b6963dc52a88" containerName="mariadb-client" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.496885 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.516119 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qjgvh"] Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.662847 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-utilities\") pod \"certified-operators-qjgvh\" (UID: \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\") " pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.663007 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9fv9\" (UniqueName: \"kubernetes.io/projected/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-kube-api-access-d9fv9\") pod \"certified-operators-qjgvh\" (UID: \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\") " pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.663077 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-catalog-content\") pod \"certified-operators-qjgvh\" (UID: \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\") " pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.764670 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9fv9\" (UniqueName: \"kubernetes.io/projected/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-kube-api-access-d9fv9\") pod \"certified-operators-qjgvh\" (UID: \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\") " pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.765530 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-catalog-content\") pod \"certified-operators-qjgvh\" (UID: \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\") " pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.765704 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-utilities\") pod \"certified-operators-qjgvh\" (UID: \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\") " pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.766135 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-catalog-content\") pod \"certified-operators-qjgvh\" (UID: \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\") " pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.766479 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-utilities\") pod \"certified-operators-qjgvh\" (UID: \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\") " pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.799176 5065 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d9fv9\" (UniqueName: \"kubernetes.io/projected/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-kube-api-access-d9fv9\") pod \"certified-operators-qjgvh\" (UID: \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\") " pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:23 crc kubenswrapper[5065]: I1008 14:47:23.834573 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:24 crc kubenswrapper[5065]: I1008 14:47:24.295363 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qjgvh"] Oct 08 14:47:25 crc kubenswrapper[5065]: I1008 14:47:25.034765 5065 generic.go:334] "Generic (PLEG): container finished" podID="3f8bf27b-f58f-4691-81c8-41f672b4ea4a" containerID="6e52c01d153288f8eb491dbb001c86b1d43d36855649f15a1f8d172fe55df870" exitCode=0 Oct 08 14:47:25 crc kubenswrapper[5065]: I1008 14:47:25.034823 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjgvh" event={"ID":"3f8bf27b-f58f-4691-81c8-41f672b4ea4a","Type":"ContainerDied","Data":"6e52c01d153288f8eb491dbb001c86b1d43d36855649f15a1f8d172fe55df870"} Oct 08 14:47:25 crc kubenswrapper[5065]: I1008 14:47:25.034852 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjgvh" event={"ID":"3f8bf27b-f58f-4691-81c8-41f672b4ea4a","Type":"ContainerStarted","Data":"094ef502a1283e99b3d05d3c5835a3839204f088351b8e7b349584fdb7b0d934"} Oct 08 14:47:25 crc kubenswrapper[5065]: I1008 14:47:25.037100 5065 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 14:47:26 crc kubenswrapper[5065]: I1008 14:47:26.045996 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjgvh" event={"ID":"3f8bf27b-f58f-4691-81c8-41f672b4ea4a","Type":"ContainerStarted","Data":"b47966a5ed0f29fd1a8b9af9c1b143dd466fcd7fd416829e5a4a2bf24ff49b49"} Oct 08 14:47:27 crc kubenswrapper[5065]: I1008 14:47:27.055910 5065 generic.go:334] "Generic (PLEG): container finished" podID="3f8bf27b-f58f-4691-81c8-41f672b4ea4a" containerID="b47966a5ed0f29fd1a8b9af9c1b143dd466fcd7fd416829e5a4a2bf24ff49b49" exitCode=0 Oct 08 14:47:27 crc kubenswrapper[5065]: I1008 14:47:27.055954 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjgvh" event={"ID":"3f8bf27b-f58f-4691-81c8-41f672b4ea4a","Type":"ContainerDied","Data":"b47966a5ed0f29fd1a8b9af9c1b143dd466fcd7fd416829e5a4a2bf24ff49b49"} Oct 08 14:47:28 crc kubenswrapper[5065]: I1008 14:47:28.064174 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjgvh" event={"ID":"3f8bf27b-f58f-4691-81c8-41f672b4ea4a","Type":"ContainerStarted","Data":"e3360f9c998f2f84cdef1c9495b6883dabffb5bc05af09e8afcbb6a0d7d4c4d1"} Oct 08 14:47:33 crc kubenswrapper[5065]: I1008 14:47:33.835098 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:33 crc kubenswrapper[5065]: I1008 14:47:33.835774 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:33 crc kubenswrapper[5065]: I1008 14:47:33.914641 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:33 crc 
kubenswrapper[5065]: I1008 14:47:33.958869 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qjgvh" podStartSLOduration=8.465639785 podStartE2EDuration="10.958832853s" podCreationTimestamp="2025-10-08 14:47:23 +0000 UTC" firstStartedPulling="2025-10-08 14:47:25.036837096 +0000 UTC m=+5346.814218863" lastFinishedPulling="2025-10-08 14:47:27.530030174 +0000 UTC m=+5349.307411931" observedRunningTime="2025-10-08 14:47:28.086368403 +0000 UTC m=+5349.863750160" watchObservedRunningTime="2025-10-08 14:47:33.958832853 +0000 UTC m=+5355.736214650" Oct 08 14:47:34 crc kubenswrapper[5065]: I1008 14:47:34.181117 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:34 crc kubenswrapper[5065]: I1008 14:47:34.249403 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qjgvh"] Oct 08 14:47:36 crc kubenswrapper[5065]: I1008 14:47:36.128719 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qjgvh" podUID="3f8bf27b-f58f-4691-81c8-41f672b4ea4a" containerName="registry-server" containerID="cri-o://e3360f9c998f2f84cdef1c9495b6883dabffb5bc05af09e8afcbb6a0d7d4c4d1" gracePeriod=2 Oct 08 14:47:36 crc kubenswrapper[5065]: I1008 14:47:36.563246 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:36 crc kubenswrapper[5065]: I1008 14:47:36.681143 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-catalog-content\") pod \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\" (UID: \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\") " Oct 08 14:47:36 crc kubenswrapper[5065]: I1008 14:47:36.681274 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-utilities\") pod \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\" (UID: \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\") " Oct 08 14:47:36 crc kubenswrapper[5065]: I1008 14:47:36.681300 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9fv9\" (UniqueName: \"kubernetes.io/projected/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-kube-api-access-d9fv9\") pod \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\" (UID: \"3f8bf27b-f58f-4691-81c8-41f672b4ea4a\") " Oct 08 14:47:36 crc kubenswrapper[5065]: I1008 14:47:36.682485 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-utilities" (OuterVolumeSpecName: "utilities") pod "3f8bf27b-f58f-4691-81c8-41f672b4ea4a" (UID: "3f8bf27b-f58f-4691-81c8-41f672b4ea4a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:47:36 crc kubenswrapper[5065]: I1008 14:47:36.682679 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:36 crc kubenswrapper[5065]: I1008 14:47:36.688226 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-kube-api-access-d9fv9" (OuterVolumeSpecName: "kube-api-access-d9fv9") pod "3f8bf27b-f58f-4691-81c8-41f672b4ea4a" (UID: "3f8bf27b-f58f-4691-81c8-41f672b4ea4a"). InnerVolumeSpecName "kube-api-access-d9fv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:47:36 crc kubenswrapper[5065]: I1008 14:47:36.784272 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9fv9\" (UniqueName: \"kubernetes.io/projected/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-kube-api-access-d9fv9\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.139324 5065 generic.go:334] "Generic (PLEG): container finished" podID="3f8bf27b-f58f-4691-81c8-41f672b4ea4a" containerID="e3360f9c998f2f84cdef1c9495b6883dabffb5bc05af09e8afcbb6a0d7d4c4d1" exitCode=0 Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.139370 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjgvh" event={"ID":"3f8bf27b-f58f-4691-81c8-41f672b4ea4a","Type":"ContainerDied","Data":"e3360f9c998f2f84cdef1c9495b6883dabffb5bc05af09e8afcbb6a0d7d4c4d1"} Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.139388 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qjgvh" Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.139401 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjgvh" event={"ID":"3f8bf27b-f58f-4691-81c8-41f672b4ea4a","Type":"ContainerDied","Data":"094ef502a1283e99b3d05d3c5835a3839204f088351b8e7b349584fdb7b0d934"} Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.139445 5065 scope.go:117] "RemoveContainer" containerID="e3360f9c998f2f84cdef1c9495b6883dabffb5bc05af09e8afcbb6a0d7d4c4d1" Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.159561 5065 scope.go:117] "RemoveContainer" containerID="b47966a5ed0f29fd1a8b9af9c1b143dd466fcd7fd416829e5a4a2bf24ff49b49" Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.184606 5065 scope.go:117] "RemoveContainer" containerID="6e52c01d153288f8eb491dbb001c86b1d43d36855649f15a1f8d172fe55df870" Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.215613 5065 scope.go:117] "RemoveContainer" containerID="e3360f9c998f2f84cdef1c9495b6883dabffb5bc05af09e8afcbb6a0d7d4c4d1" Oct 08 14:47:37 crc kubenswrapper[5065]: E1008 14:47:37.216262 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3360f9c998f2f84cdef1c9495b6883dabffb5bc05af09e8afcbb6a0d7d4c4d1\": container with ID starting with e3360f9c998f2f84cdef1c9495b6883dabffb5bc05af09e8afcbb6a0d7d4c4d1 not found: ID does not exist" containerID="e3360f9c998f2f84cdef1c9495b6883dabffb5bc05af09e8afcbb6a0d7d4c4d1" Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.216334 5065 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e3360f9c998f2f84cdef1c9495b6883dabffb5bc05af09e8afcbb6a0d7d4c4d1"} err="failed to get container status \"e3360f9c998f2f84cdef1c9495b6883dabffb5bc05af09e8afcbb6a0d7d4c4d1\": rpc error: code = NotFound desc = could not find container \"e3360f9c998f2f84cdef1c9495b6883dabffb5bc05af09e8afcbb6a0d7d4c4d1\": container with ID starting with e3360f9c998f2f84cdef1c9495b6883dabffb5bc05af09e8afcbb6a0d7d4c4d1 not found: ID does not exist" Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.216376 5065 scope.go:117] "RemoveContainer" containerID="b47966a5ed0f29fd1a8b9af9c1b143dd466fcd7fd416829e5a4a2bf24ff49b49" Oct 08 14:47:37 crc kubenswrapper[5065]: E1008 14:47:37.216896 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b47966a5ed0f29fd1a8b9af9c1b143dd466fcd7fd416829e5a4a2bf24ff49b49\": container with ID starting with b47966a5ed0f29fd1a8b9af9c1b143dd466fcd7fd416829e5a4a2bf24ff49b49 not found: ID does not exist" containerID="b47966a5ed0f29fd1a8b9af9c1b143dd466fcd7fd416829e5a4a2bf24ff49b49" Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.216947 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47966a5ed0f29fd1a8b9af9c1b143dd466fcd7fd416829e5a4a2bf24ff49b49"} err="failed to get container status \"b47966a5ed0f29fd1a8b9af9c1b143dd466fcd7fd416829e5a4a2bf24ff49b49\": rpc error: code = NotFound desc = could not find container \"b47966a5ed0f29fd1a8b9af9c1b143dd466fcd7fd416829e5a4a2bf24ff49b49\": container with ID starting with b47966a5ed0f29fd1a8b9af9c1b143dd466fcd7fd416829e5a4a2bf24ff49b49 not found: ID does not exist" Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.216984 5065 scope.go:117] "RemoveContainer" containerID="6e52c01d153288f8eb491dbb001c86b1d43d36855649f15a1f8d172fe55df870" Oct 08 14:47:37 crc kubenswrapper[5065]: E1008 14:47:37.217842 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e52c01d153288f8eb491dbb001c86b1d43d36855649f15a1f8d172fe55df870\": container with ID starting with 6e52c01d153288f8eb491dbb001c86b1d43d36855649f15a1f8d172fe55df870 not found: ID does not exist" containerID="6e52c01d153288f8eb491dbb001c86b1d43d36855649f15a1f8d172fe55df870" Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.217893 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e52c01d153288f8eb491dbb001c86b1d43d36855649f15a1f8d172fe55df870"} err="failed to get container status \"6e52c01d153288f8eb491dbb001c86b1d43d36855649f15a1f8d172fe55df870\": rpc error: code = NotFound desc = could not find container \"6e52c01d153288f8eb491dbb001c86b1d43d36855649f15a1f8d172fe55df870\": container with ID starting with 6e52c01d153288f8eb491dbb001c86b1d43d36855649f15a1f8d172fe55df870 not found: ID does not exist" Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.304948 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f8bf27b-f58f-4691-81c8-41f672b4ea4a" (UID: "3f8bf27b-f58f-4691-81c8-41f672b4ea4a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.393151 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f8bf27b-f58f-4691-81c8-41f672b4ea4a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.490145 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qjgvh"] Oct 08 14:47:37 crc kubenswrapper[5065]: I1008 14:47:37.501855 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qjgvh"] Oct 08 14:47:38 crc kubenswrapper[5065]: I1008 14:47:38.891969 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f8bf27b-f58f-4691-81c8-41f672b4ea4a" path="/var/lib/kubelet/pods/3f8bf27b-f58f-4691-81c8-41f672b4ea4a/volumes" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.091757 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-m8x8w/must-gather-kxttl"] Oct 08 14:48:00 crc kubenswrapper[5065]: E1008 14:48:00.092589 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8bf27b-f58f-4691-81c8-41f672b4ea4a" containerName="extract-content" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.092602 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8bf27b-f58f-4691-81c8-41f672b4ea4a" containerName="extract-content" Oct 08 14:48:00 crc kubenswrapper[5065]: E1008 14:48:00.092624 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8bf27b-f58f-4691-81c8-41f672b4ea4a" containerName="extract-utilities" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.092631 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8bf27b-f58f-4691-81c8-41f672b4ea4a" containerName="extract-utilities" Oct 08 14:48:00 crc kubenswrapper[5065]: E1008 14:48:00.092646 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8bf27b-f58f-4691-81c8-41f672b4ea4a" containerName="registry-server" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.092652 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8bf27b-f58f-4691-81c8-41f672b4ea4a" containerName="registry-server" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.092793 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8bf27b-f58f-4691-81c8-41f672b4ea4a" containerName="registry-server" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.093720 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-m8x8w/must-gather-kxttl" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.095912 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-m8x8w"/"openshift-service-ca.crt" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.098320 5065 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-m8x8w"/"default-dockercfg-wdsbj" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.098560 5065 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-m8x8w"/"kube-root-ca.crt" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.102440 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-m8x8w/must-gather-kxttl"] Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.123694 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqqgn\" (UniqueName: \"kubernetes.io/projected/8fa21b44-d5b3-4149-a902-5e46b10f3318-kube-api-access-vqqgn\") pod \"must-gather-kxttl\" (UID: \"8fa21b44-d5b3-4149-a902-5e46b10f3318\") " pod="openshift-must-gather-m8x8w/must-gather-kxttl" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.123879 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8fa21b44-d5b3-4149-a902-5e46b10f3318-must-gather-output\") pod \"must-gather-kxttl\" (UID: \"8fa21b44-d5b3-4149-a902-5e46b10f3318\") " pod="openshift-must-gather-m8x8w/must-gather-kxttl" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.225856 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8fa21b44-d5b3-4149-a902-5e46b10f3318-must-gather-output\") pod \"must-gather-kxttl\" (UID: \"8fa21b44-d5b3-4149-a902-5e46b10f3318\") " pod="openshift-must-gather-m8x8w/must-gather-kxttl" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.225990 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqqgn\" (UniqueName: \"kubernetes.io/projected/8fa21b44-d5b3-4149-a902-5e46b10f3318-kube-api-access-vqqgn\") pod \"must-gather-kxttl\" (UID: \"8fa21b44-d5b3-4149-a902-5e46b10f3318\") " pod="openshift-must-gather-m8x8w/must-gather-kxttl" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.226579 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8fa21b44-d5b3-4149-a902-5e46b10f3318-must-gather-output\") pod \"must-gather-kxttl\" (UID: \"8fa21b44-d5b3-4149-a902-5e46b10f3318\") " pod="openshift-must-gather-m8x8w/must-gather-kxttl" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.248109 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqqgn\" (UniqueName: \"kubernetes.io/projected/8fa21b44-d5b3-4149-a902-5e46b10f3318-kube-api-access-vqqgn\") pod \"must-gather-kxttl\" (UID: \"8fa21b44-d5b3-4149-a902-5e46b10f3318\") " pod="openshift-must-gather-m8x8w/must-gather-kxttl" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.412245 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-m8x8w/must-gather-kxttl" Oct 08 14:48:00 crc kubenswrapper[5065]: I1008 14:48:00.648614 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-m8x8w/must-gather-kxttl"] Oct 08 14:48:01 crc kubenswrapper[5065]: I1008 14:48:01.361280 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-m8x8w/must-gather-kxttl" event={"ID":"8fa21b44-d5b3-4149-a902-5e46b10f3318","Type":"ContainerStarted","Data":"c8ba5084fa605f1c20fed725995a62816f9fca07a0566bbd0a0e4816cbd25877"} Oct 08 14:48:05 crc kubenswrapper[5065]: I1008 14:48:05.410223 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-m8x8w/must-gather-kxttl" event={"ID":"8fa21b44-d5b3-4149-a902-5e46b10f3318","Type":"ContainerStarted","Data":"4a027ae71561063534a72e7228c5c24778aeccd9f0f525140e8d37f8e7606202"} Oct 08 14:48:05 crc kubenswrapper[5065]: I1008 14:48:05.411022 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-m8x8w/must-gather-kxttl" event={"ID":"8fa21b44-d5b3-4149-a902-5e46b10f3318","Type":"ContainerStarted","Data":"7d49d41183e48a6dc6a7e294c7bd5fb85dd2f8a48b19d381ad9bc1356aa07faa"} Oct 08 14:48:05 crc kubenswrapper[5065]: I1008 14:48:05.446071 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-m8x8w/must-gather-kxttl" podStartSLOduration=1.7773926599999998 podStartE2EDuration="5.446047833s" podCreationTimestamp="2025-10-08 14:48:00 +0000 UTC" firstStartedPulling="2025-10-08 14:48:00.662684236 +0000 UTC m=+5382.440065993" lastFinishedPulling="2025-10-08 14:48:04.331339409 +0000 UTC m=+5386.108721166" observedRunningTime="2025-10-08 14:48:05.438464126 +0000 UTC m=+5387.215845903" watchObservedRunningTime="2025-10-08 14:48:05.446047833 +0000 UTC m=+5387.223429610" Oct 08 14:48:24 crc kubenswrapper[5065]: I1008 14:48:24.375471 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:48:24 crc kubenswrapper[5065]: I1008 14:48:24.375975 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:48:29 crc kubenswrapper[5065]: I1008 14:48:29.995054 5065 scope.go:117] "RemoveContainer" containerID="1f1eff6ef60a9b385d4f9040f27c01b82b8e4e6428c6faa8f5190c6a06d36975" Oct 08 14:48:43 crc kubenswrapper[5065]: I1008 14:48:43.602609 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5fdc957c47-6rk7g_e9acc6a9-cad9-42ff-9832-b305696f1785/init/0.log" Oct 08 14:48:43 crc kubenswrapper[5065]: I1008 14:48:43.785794 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5fdc957c47-6rk7g_e9acc6a9-cad9-42ff-9832-b305696f1785/init/0.log" Oct 08 14:48:43 crc kubenswrapper[5065]: I1008 14:48:43.791286 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5fdc957c47-6rk7g_e9acc6a9-cad9-42ff-9832-b305696f1785/dnsmasq-dns/0.log" Oct 08 14:48:43 crc kubenswrapper[5065]: I1008 14:48:43.949429 5065 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_mariadb-copy-data_4ec8b13a-eddf-4335-9a1d-417b6ac0c923/adoption/0.log" Oct 08 14:48:44 crc kubenswrapper[5065]: I1008 14:48:44.004137 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_c3557dec-b0d7-43ef-b8e2-eab685138bc6/memcached/0.log" Oct 08 14:48:44 crc kubenswrapper[5065]: I1008 14:48:44.154317 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_924dec42-f6c3-4827-9944-b654fd9268ee/mysql-bootstrap/0.log" Oct 08 14:48:44 crc kubenswrapper[5065]: I1008 14:48:44.286376 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_924dec42-f6c3-4827-9944-b654fd9268ee/mysql-bootstrap/0.log" Oct 08 14:48:44 crc kubenswrapper[5065]: I1008 14:48:44.370668 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_924dec42-f6c3-4827-9944-b654fd9268ee/galera/0.log" Oct 08 14:48:44 crc kubenswrapper[5065]: I1008 14:48:44.396510 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_92c8e8bf-929a-41a7-9184-d416a49abd5c/mysql-bootstrap/0.log" Oct 08 14:48:44 crc kubenswrapper[5065]: I1008 14:48:44.585011 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_92c8e8bf-929a-41a7-9184-d416a49abd5c/mysql-bootstrap/0.log" Oct 08 14:48:44 crc kubenswrapper[5065]: I1008 14:48:44.592858 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_92c8e8bf-929a-41a7-9184-d416a49abd5c/galera/0.log" Oct 08 14:48:44 crc kubenswrapper[5065]: I1008 14:48:44.626588 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_58778956-908f-40b7-921d-920c247222e7/setup-container/0.log" Oct 08 14:48:44 crc kubenswrapper[5065]: I1008 14:48:44.780549 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_58778956-908f-40b7-921d-920c247222e7/setup-container/0.log" Oct 08 14:48:44 crc kubenswrapper[5065]: I1008 14:48:44.786000 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_abbae3c9-79f9-4195-8988-eb1137bfa8ee/setup-container/0.log" Oct 08 14:48:44 crc kubenswrapper[5065]: I1008 14:48:44.820196 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_58778956-908f-40b7-921d-920c247222e7/rabbitmq/0.log" Oct 08 14:48:44 crc kubenswrapper[5065]: I1008 14:48:44.940768 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_abbae3c9-79f9-4195-8988-eb1137bfa8ee/setup-container/0.log" Oct 08 14:48:44 crc kubenswrapper[5065]: I1008 14:48:44.988687 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_abbae3c9-79f9-4195-8988-eb1137bfa8ee/rabbitmq/0.log" Oct 08 14:48:54 crc kubenswrapper[5065]: I1008 14:48:54.375919 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:48:54 crc kubenswrapper[5065]: I1008 14:48:54.376371 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:48:58 crc kubenswrapper[5065]: I1008 14:48:58.247350 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-658bdf4b74-pjzvh_1b78b39c-e53e-4efa-96b8-185f730711fb/kube-rbac-proxy/0.log" Oct 08 14:48:58 crc kubenswrapper[5065]: I1008 14:48:58.317404 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-658bdf4b74-pjzvh_1b78b39c-e53e-4efa-96b8-185f730711fb/manager/0.log" Oct 08 14:48:58 crc kubenswrapper[5065]: I1008 14:48:58.411350 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7b7fb68549-q2tsx_9848db5e-38fd-4867-a9a3-8945c5c4fc27/kube-rbac-proxy/0.log" Oct 08 14:48:58 crc kubenswrapper[5065]: I1008 14:48:58.464558 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7b7fb68549-q2tsx_9848db5e-38fd-4867-a9a3-8945c5c4fc27/manager/0.log" Oct 08 14:48:58 crc kubenswrapper[5065]: I1008 14:48:58.618579 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-85d5d9dd78-g9s57_72eb96ef-8ded-45ed-a440-be05e49c7667/kube-rbac-proxy/0.log" Oct 08 14:48:58 crc kubenswrapper[5065]: I1008 14:48:58.636569 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-85d5d9dd78-g9s57_72eb96ef-8ded-45ed-a440-be05e49c7667/manager/0.log" Oct 08 14:48:58 crc kubenswrapper[5065]: I1008 14:48:58.683010 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j_4b2477a8-1c49-49e9-9ac3-4feb075a95a0/util/0.log" Oct 08 14:48:58 crc kubenswrapper[5065]: I1008 14:48:58.858774 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j_4b2477a8-1c49-49e9-9ac3-4feb075a95a0/pull/0.log" Oct 08 14:48:58 crc kubenswrapper[5065]: I1008 14:48:58.875234 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j_4b2477a8-1c49-49e9-9ac3-4feb075a95a0/util/0.log" Oct 08 14:48:58 crc kubenswrapper[5065]: I1008 14:48:58.896735 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j_4b2477a8-1c49-49e9-9ac3-4feb075a95a0/pull/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.008925 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j_4b2477a8-1c49-49e9-9ac3-4feb075a95a0/pull/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.048639 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j_4b2477a8-1c49-49e9-9ac3-4feb075a95a0/util/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.061059 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eee899de00471dca260c21f15d574ae705f89c0888bdad088fc990c0echdq2j_4b2477a8-1c49-49e9-9ac3-4feb075a95a0/extract/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.188829 5065 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84b9b84486-rlkxt_d306130a-6424-4380-8de6-74adc212298d/kube-rbac-proxy/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.260874 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-858f76bbdd-xlm5g_8415abb2-d31f-443c-b458-775e281540a6/kube-rbac-proxy/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.292552 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84b9b84486-rlkxt_d306130a-6424-4380-8de6-74adc212298d/manager/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.362483 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-858f76bbdd-xlm5g_8415abb2-d31f-443c-b458-775e281540a6/manager/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.418653 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7ffbcb7588-x8kpn_8538f19d-d12a-4d6f-ae3c-f71fa9dfe0ba/kube-rbac-proxy/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.465609 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7ffbcb7588-x8kpn_8538f19d-d12a-4d6f-ae3c-f71fa9dfe0ba/manager/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.578581 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-656bcbd775-gwsl7_ce67216a-bf27-40b0-8beb-bec511f71d94/kube-rbac-proxy/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.728824 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-9c5c78d49-dskvg_fc91f24e-897a-45d0-8119-d3a5e75e989d/kube-rbac-proxy/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.781623 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-656bcbd775-gwsl7_ce67216a-bf27-40b0-8beb-bec511f71d94/manager/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.815444 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-9c5c78d49-dskvg_fc91f24e-897a-45d0-8119-d3a5e75e989d/manager/0.log" Oct 08 14:48:59 crc kubenswrapper[5065]: I1008 14:48:59.893453 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55b6b7c7b8-p8j4j_c4e220c5-f7d7-41aa-b250-94c0fc693dd9/kube-rbac-proxy/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.016404 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55b6b7c7b8-p8j4j_c4e220c5-f7d7-41aa-b250-94c0fc693dd9/manager/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.057824 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5f67fbc655-zptvr_55bc4a42-9132-41b0-bd10-d05a51fff80e/kube-rbac-proxy/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.091940 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5f67fbc655-zptvr_55bc4a42-9132-41b0-bd10-d05a51fff80e/manager/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.231017 5065 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-f9fb45f8f-rqvgq_6f6d22ca-e21b-4cc9-8640-ac38a35bbd7a/kube-rbac-proxy/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.253770 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-f9fb45f8f-rqvgq_6f6d22ca-e21b-4cc9-8640-ac38a35bbd7a/manager/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.366559 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-79d585cb66-tlrpv_a1ef138e-172a-4b51-aaca-1bfb30b7cc3a/kube-rbac-proxy/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.413326 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-79d585cb66-tlrpv_a1ef138e-172a-4b51-aaca-1bfb30b7cc3a/manager/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.477443 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5df598886f-5k4b8_2e76f34c-8ac9-408e-96ca-0eaf5aa470cf/kube-rbac-proxy/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.655533 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69fdcfc5f5-7fv6z_fb914e93-c33d-44ae-a713-7bd24af3faff/kube-rbac-proxy/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.663775 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5df598886f-5k4b8_2e76f34c-8ac9-408e-96ca-0eaf5aa470cf/manager/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.687617 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69fdcfc5f5-7fv6z_fb914e93-c33d-44ae-a713-7bd24af3faff/manager/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.821933 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-747747dfccng5kz_b72ec83c-136e-4cde-8aa9-e978bbe7cd2a/kube-rbac-proxy/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.864360 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-747747dfccng5kz_b72ec83c-136e-4cde-8aa9-e978bbe7cd2a/manager/0.log" Oct 08 14:49:00 crc kubenswrapper[5065]: I1008 14:49:00.997627 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8bc6b8f5b-s7fmg_cf758b71-afa0-4ca6-a481-4a01aa013427/kube-rbac-proxy/0.log" Oct 08 14:49:01 crc kubenswrapper[5065]: I1008 14:49:01.097045 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-55f65988b-48cvv_357f82f7-a9c7-456b-829b-e9ee64bf828e/kube-rbac-proxy/0.log" Oct 08 14:49:01 crc kubenswrapper[5065]: I1008 14:49:01.317025 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-55f65988b-48cvv_357f82f7-a9c7-456b-829b-e9ee64bf828e/operator/0.log" Oct 08 14:49:01 crc kubenswrapper[5065]: I1008 14:49:01.408192 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-x8t65_a3797803-3939-430e-a68b-3861cc14e098/registry-server/0.log" Oct 08 14:49:01 crc kubenswrapper[5065]: 
I1008 14:49:01.511339 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-79db49b9fb-mw7bb_9d528074-25d9-43df-80ac-e7f4aa8573bc/kube-rbac-proxy/0.log" Oct 08 14:49:01 crc kubenswrapper[5065]: I1008 14:49:01.715810 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-79db49b9fb-mw7bb_9d528074-25d9-43df-80ac-e7f4aa8573bc/manager/0.log" Oct 08 14:49:01 crc kubenswrapper[5065]: I1008 14:49:01.745360 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-68b6c87b68-9wdcx_2b64adbd-0608-45f4-aaf8-3d7af011873e/kube-rbac-proxy/0.log" Oct 08 14:49:01 crc kubenswrapper[5065]: I1008 14:49:01.793652 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-68b6c87b68-9wdcx_2b64adbd-0608-45f4-aaf8-3d7af011873e/manager/0.log" Oct 08 14:49:01 crc kubenswrapper[5065]: I1008 14:49:01.848302 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8bc6b8f5b-s7fmg_cf758b71-afa0-4ca6-a481-4a01aa013427/manager/0.log" Oct 08 14:49:01 crc kubenswrapper[5065]: I1008 14:49:01.919836 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-j8b92_3d693cfe-0346-4970-ba03-dde30d33fb28/operator/0.log" Oct 08 14:49:01 crc kubenswrapper[5065]: I1008 14:49:01.986186 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-db6d7f97b-947sm_bfa86e67-08ac-4df7-84fe-c084e6c05bc1/kube-rbac-proxy/0.log" Oct 08 14:49:02 crc kubenswrapper[5065]: I1008 14:49:02.083087 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-db6d7f97b-947sm_bfa86e67-08ac-4df7-84fe-c084e6c05bc1/manager/0.log" Oct 08 14:49:02 crc kubenswrapper[5065]: I1008 14:49:02.113241 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76796d4c6b-rwmpv_b9279847-3be8-4917-b1e3-b2d4459f45de/kube-rbac-proxy/0.log" Oct 08 14:49:02 crc kubenswrapper[5065]: I1008 14:49:02.209962 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76796d4c6b-rwmpv_b9279847-3be8-4917-b1e3-b2d4459f45de/manager/0.log" Oct 08 14:49:02 crc kubenswrapper[5065]: I1008 14:49:02.224775 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56c698c775-nct4q_4af395f3-a5b6-4f08-9cf9-99ca8ef679f1/kube-rbac-proxy/0.log" Oct 08 14:49:02 crc kubenswrapper[5065]: I1008 14:49:02.299696 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56c698c775-nct4q_4af395f3-a5b6-4f08-9cf9-99ca8ef679f1/manager/0.log" Oct 08 14:49:02 crc kubenswrapper[5065]: I1008 14:49:02.369260 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7794bc6bd-kjqtn_24e4cd94-cd0b-440f-8574-d93134d9b63d/kube-rbac-proxy/0.log" Oct 08 14:49:02 crc kubenswrapper[5065]: I1008 14:49:02.371158 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7794bc6bd-kjqtn_24e4cd94-cd0b-440f-8574-d93134d9b63d/manager/0.log" Oct 08 14:49:17 crc 
kubenswrapper[5065]: I1008 14:49:17.692790 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-c8bkc_025e6f00-f56b-4674-9cf8-6ddb57afe15f/control-plane-machine-set-operator/0.log" Oct 08 14:49:17 crc kubenswrapper[5065]: I1008 14:49:17.874790 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xr8vs_42132cd2-ec8f-47e2-8011-6f39c454977f/kube-rbac-proxy/0.log" Oct 08 14:49:17 crc kubenswrapper[5065]: I1008 14:49:17.913457 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xr8vs_42132cd2-ec8f-47e2-8011-6f39c454977f/machine-api-operator/0.log" Oct 08 14:49:24 crc kubenswrapper[5065]: I1008 14:49:24.375067 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:49:24 crc kubenswrapper[5065]: I1008 14:49:24.375602 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:49:24 crc kubenswrapper[5065]: I1008 14:49:24.375655 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" Oct 08 14:49:24 crc kubenswrapper[5065]: I1008 14:49:24.376331 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a2d5920533fb1103902f95c6619956e7541ada7a9d91b36d2c215022ce77856"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:49:24 crc kubenswrapper[5065]: I1008 14:49:24.376394 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://6a2d5920533fb1103902f95c6619956e7541ada7a9d91b36d2c215022ce77856" gracePeriod=600 Oct 08 14:49:25 crc kubenswrapper[5065]: I1008 14:49:25.011353 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="6a2d5920533fb1103902f95c6619956e7541ada7a9d91b36d2c215022ce77856" exitCode=0 Oct 08 14:49:25 crc kubenswrapper[5065]: I1008 14:49:25.011382 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"6a2d5920533fb1103902f95c6619956e7541ada7a9d91b36d2c215022ce77856"} Oct 08 14:49:25 crc kubenswrapper[5065]: I1008 14:49:25.011647 5065 scope.go:117] "RemoveContainer" containerID="41a8d3dc1dfda374a40e3b7c16b3b225c49af2ec3a59196050bfbeb046ee48c0" Oct 08 14:49:26 crc kubenswrapper[5065]: I1008 14:49:26.020364 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" 
event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerStarted","Data":"ab447fd9e35dc2a113badc8451454e8b87660b54c7e6e35d13bf8c60219cdeec"} Oct 08 14:49:30 crc kubenswrapper[5065]: I1008 14:49:30.074121 5065 scope.go:117] "RemoveContainer" containerID="c47c4d5ca4b2e677faef75fa4e39bf4953b33c59f4467d06cf34eae83aa1c74d" Oct 08 14:49:30 crc kubenswrapper[5065]: I1008 14:49:30.105150 5065 scope.go:117] "RemoveContainer" containerID="829a48c0447ffcd1089e224970101d443341247e7749360e9438cd83eb4c01a9" Oct 08 14:49:30 crc kubenswrapper[5065]: I1008 14:49:30.132931 5065 scope.go:117] "RemoveContainer" containerID="da8d01ac05a54e79a180c04198fff0d901650dce879ab0dd7687512c643b88bd" Oct 08 14:49:30 crc kubenswrapper[5065]: I1008 14:49:30.431167 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-7d4cc89fcb-7vwzk_b31a82f5-8cf6-4b86-afc0-281fb7b63aa6/cert-manager-controller/0.log" Oct 08 14:49:30 crc kubenswrapper[5065]: I1008 14:49:30.575052 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7d9f95dbf-vl996_018f649c-4363-4d87-82b8-e1f93c39d71f/cert-manager-cainjector/0.log" Oct 08 14:49:30 crc kubenswrapper[5065]: I1008 14:49:30.655143 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-d969966f-w2vbv_fb7a3520-d62f-4b03-9d1a-f96731b4cb35/cert-manager-webhook/0.log" Oct 08 14:49:42 crc kubenswrapper[5065]: I1008 14:49:42.112520 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-qmx4z_dde1f866-08d3-4a05-875b-c11f61ec5ed9/nmstate-console-plugin/0.log" Oct 08 14:49:42 crc kubenswrapper[5065]: I1008 14:49:42.216337 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-s72n4_71e6a10c-0de2-43a2-b18f-9610189ccc7d/nmstate-handler/0.log" Oct 08 14:49:42 crc kubenswrapper[5065]: I1008 14:49:42.239685 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-m44l9_4c1ce008-ad03-4842-8e85-c214bb157619/kube-rbac-proxy/0.log" Oct 08 14:49:42 crc kubenswrapper[5065]: I1008 14:49:42.289177 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-m44l9_4c1ce008-ad03-4842-8e85-c214bb157619/nmstate-metrics/0.log" Oct 08 14:49:42 crc kubenswrapper[5065]: I1008 14:49:42.425035 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-jgq8v_cdd97313-0ee9-4f04-9cc0-f017e4440a89/nmstate-operator/0.log" Oct 08 14:49:42 crc kubenswrapper[5065]: I1008 14:49:42.457199 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-nqrvj_42352040-d859-4004-8b46-a472048a2c0a/nmstate-webhook/0.log" Oct 08 14:49:55 crc kubenswrapper[5065]: I1008 14:49:55.739020 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-f288k_7ccfc78e-a0ef-4111-bb6a-a09629363cc1/kube-rbac-proxy/0.log" Oct 08 14:49:55 crc kubenswrapper[5065]: I1008 14:49:55.938990 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/cp-frr-files/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.189705 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-f288k_7ccfc78e-a0ef-4111-bb6a-a09629363cc1/controller/0.log" Oct 08 14:49:56 crc 
kubenswrapper[5065]: I1008 14:49:56.191130 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/cp-metrics/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.197917 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/cp-frr-files/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.212668 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/cp-reloader/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.333237 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/cp-reloader/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.509127 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/cp-metrics/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.530220 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/cp-reloader/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.531996 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/cp-frr-files/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.533040 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/cp-metrics/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.689352 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/cp-frr-files/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.722225 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/cp-metrics/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.739122 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/controller/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.745582 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/cp-reloader/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.905014 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/frr-metrics/0.log" Oct 08 14:49:56 crc kubenswrapper[5065]: I1008 14:49:56.929273 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/kube-rbac-proxy/0.log" Oct 08 14:49:57 crc kubenswrapper[5065]: I1008 14:49:57.009176 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/kube-rbac-proxy-frr/0.log" Oct 08 14:49:57 crc kubenswrapper[5065]: I1008 14:49:57.083563 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/reloader/0.log" Oct 08 14:49:57 crc kubenswrapper[5065]: I1008 14:49:57.190736 5065 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-fzmv8_bed73333-26e4-481b-a2f6-4161b3208832/frr-k8s-webhook-server/0.log" Oct 08 14:49:57 crc kubenswrapper[5065]: I1008 14:49:57.337585 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5fbf659dfd-bttmf_f4a0f022-1142-4830-aaff-7458bd6c1c94/manager/0.log" Oct 08 14:49:57 crc kubenswrapper[5065]: I1008 14:49:57.497237 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-f4447b86b-749tv_aa74013a-ec8e-45ca-9150-99e2a326e28f/webhook-server/0.log" Oct 08 14:49:57 crc kubenswrapper[5065]: I1008 14:49:57.610287 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fqqvb_65acc9b9-f518-4f3a-857f-e86f6e453478/kube-rbac-proxy/0.log" Oct 08 14:49:58 crc kubenswrapper[5065]: I1008 14:49:58.184236 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fqqvb_65acc9b9-f518-4f3a-857f-e86f6e453478/speaker/0.log" Oct 08 14:49:58 crc kubenswrapper[5065]: I1008 14:49:58.455633 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jkbrg_8e4d5cef-564b-426e-9481-afef3c9f6b52/frr/0.log" Oct 08 14:50:02 crc kubenswrapper[5065]: I1008 14:50:02.963901 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lfc7t"] Oct 08 14:50:02 crc kubenswrapper[5065]: I1008 14:50:02.966139 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfc7t" Oct 08 14:50:02 crc kubenswrapper[5065]: I1008 14:50:02.983758 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfc7t"] Oct 08 14:50:03 crc kubenswrapper[5065]: I1008 14:50:03.079636 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcmkv\" (UniqueName: \"kubernetes.io/projected/1c513069-7194-4048-b4d2-75b33c12a946-kube-api-access-mcmkv\") pod \"redhat-marketplace-lfc7t\" (UID: \"1c513069-7194-4048-b4d2-75b33c12a946\") " pod="openshift-marketplace/redhat-marketplace-lfc7t" Oct 08 14:50:03 crc kubenswrapper[5065]: I1008 14:50:03.079722 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c513069-7194-4048-b4d2-75b33c12a946-catalog-content\") pod \"redhat-marketplace-lfc7t\" (UID: \"1c513069-7194-4048-b4d2-75b33c12a946\") " pod="openshift-marketplace/redhat-marketplace-lfc7t" Oct 08 14:50:03 crc kubenswrapper[5065]: I1008 14:50:03.079983 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c513069-7194-4048-b4d2-75b33c12a946-utilities\") pod \"redhat-marketplace-lfc7t\" (UID: \"1c513069-7194-4048-b4d2-75b33c12a946\") " pod="openshift-marketplace/redhat-marketplace-lfc7t" Oct 08 14:50:03 crc kubenswrapper[5065]: I1008 14:50:03.181452 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c513069-7194-4048-b4d2-75b33c12a946-utilities\") pod \"redhat-marketplace-lfc7t\" (UID: \"1c513069-7194-4048-b4d2-75b33c12a946\") " pod="openshift-marketplace/redhat-marketplace-lfc7t" Oct 08 14:50:03 crc kubenswrapper[5065]: I1008 14:50:03.181519 5065 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mcmkv\" (UniqueName: \"kubernetes.io/projected/1c513069-7194-4048-b4d2-75b33c12a946-kube-api-access-mcmkv\") pod \"redhat-marketplace-lfc7t\" (UID: \"1c513069-7194-4048-b4d2-75b33c12a946\") " pod="openshift-marketplace/redhat-marketplace-lfc7t" Oct 08 14:50:03 crc kubenswrapper[5065]: I1008 14:50:03.181564 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c513069-7194-4048-b4d2-75b33c12a946-catalog-content\") pod \"redhat-marketplace-lfc7t\" (UID: \"1c513069-7194-4048-b4d2-75b33c12a946\") " pod="openshift-marketplace/redhat-marketplace-lfc7t" Oct 08 14:50:03 crc kubenswrapper[5065]: I1008 14:50:03.181958 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c513069-7194-4048-b4d2-75b33c12a946-utilities\") pod \"redhat-marketplace-lfc7t\" (UID: \"1c513069-7194-4048-b4d2-75b33c12a946\") " pod="openshift-marketplace/redhat-marketplace-lfc7t" Oct 08 14:50:03 crc kubenswrapper[5065]: I1008 14:50:03.182058 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c513069-7194-4048-b4d2-75b33c12a946-catalog-content\") pod \"redhat-marketplace-lfc7t\" (UID: \"1c513069-7194-4048-b4d2-75b33c12a946\") " pod="openshift-marketplace/redhat-marketplace-lfc7t" Oct 08 14:50:03 crc kubenswrapper[5065]: I1008 14:50:03.206434 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcmkv\" (UniqueName: \"kubernetes.io/projected/1c513069-7194-4048-b4d2-75b33c12a946-kube-api-access-mcmkv\") pod \"redhat-marketplace-lfc7t\" (UID: \"1c513069-7194-4048-b4d2-75b33c12a946\") " pod="openshift-marketplace/redhat-marketplace-lfc7t" Oct 08 14:50:03 crc kubenswrapper[5065]: I1008 14:50:03.288488 5065 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfc7t" Oct 08 14:50:03 crc kubenswrapper[5065]: I1008 14:50:03.711087 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfc7t"] Oct 08 14:50:04 crc kubenswrapper[5065]: I1008 14:50:04.316001 5065 generic.go:334] "Generic (PLEG): container finished" podID="1c513069-7194-4048-b4d2-75b33c12a946" containerID="f88b50f313b1218d9ec476e1e50774773e8732ea3db53e24e9c352c2cb78fa7a" exitCode=0 Oct 08 14:50:04 crc kubenswrapper[5065]: I1008 14:50:04.316056 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfc7t" event={"ID":"1c513069-7194-4048-b4d2-75b33c12a946","Type":"ContainerDied","Data":"f88b50f313b1218d9ec476e1e50774773e8732ea3db53e24e9c352c2cb78fa7a"} Oct 08 14:50:04 crc kubenswrapper[5065]: I1008 14:50:04.316339 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfc7t" event={"ID":"1c513069-7194-4048-b4d2-75b33c12a946","Type":"ContainerStarted","Data":"db3631ca74fe59d2d1095f96a0bf3872d7f78d57daea20a475ff4590b4cd574b"} Oct 08 14:50:05 crc kubenswrapper[5065]: I1008 14:50:05.326631 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfc7t" event={"ID":"1c513069-7194-4048-b4d2-75b33c12a946","Type":"ContainerStarted","Data":"9cdbeca851c2944edc208f64d015f5db2088eb9e29d4a2d28572593544194e60"} Oct 08 14:50:06 crc kubenswrapper[5065]: I1008 14:50:06.336454 5065 generic.go:334] "Generic (PLEG): container finished" podID="1c513069-7194-4048-b4d2-75b33c12a946" containerID="9cdbeca851c2944edc208f64d015f5db2088eb9e29d4a2d28572593544194e60" exitCode=0 Oct 08 14:50:06 crc kubenswrapper[5065]: I1008 14:50:06.336493 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfc7t" event={"ID":"1c513069-7194-4048-b4d2-75b33c12a946","Type":"ContainerDied","Data":"9cdbeca851c2944edc208f64d015f5db2088eb9e29d4a2d28572593544194e60"} Oct 08 14:50:07 crc kubenswrapper[5065]: I1008 14:50:07.346624 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfc7t" event={"ID":"1c513069-7194-4048-b4d2-75b33c12a946","Type":"ContainerStarted","Data":"09a291a29557e1ccac7539416ec486f92731621fb1b4c55092cc48e77b46a206"} Oct 08 14:50:07 crc kubenswrapper[5065]: I1008 14:50:07.370158 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lfc7t" podStartSLOduration=2.93779959 podStartE2EDuration="5.370135085s" podCreationTimestamp="2025-10-08 14:50:02 +0000 UTC" firstStartedPulling="2025-10-08 14:50:04.317992687 +0000 UTC m=+5506.095374454" lastFinishedPulling="2025-10-08 14:50:06.750328192 +0000 UTC m=+5508.527709949" observedRunningTime="2025-10-08 14:50:07.363936246 +0000 UTC m=+5509.141318013" watchObservedRunningTime="2025-10-08 14:50:07.370135085 +0000 UTC m=+5509.147516842" Oct 08 14:50:09 crc kubenswrapper[5065]: I1008 14:50:09.789030 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2_75792fb1-86a8-4681-b318-ddce6b276cac/util/0.log" Oct 08 14:50:09 crc kubenswrapper[5065]: I1008 14:50:09.994960 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2_75792fb1-86a8-4681-b318-ddce6b276cac/util/0.log" Oct 08 14:50:10 crc 
Oct 08 14:50:10 crc kubenswrapper[5065]: I1008 14:50:10.033960 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2_75792fb1-86a8-4681-b318-ddce6b276cac/pull/0.log"
Oct 08 14:50:10 crc kubenswrapper[5065]: I1008 14:50:10.046781 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2_75792fb1-86a8-4681-b318-ddce6b276cac/pull/0.log"
Oct 08 14:50:10 crc kubenswrapper[5065]: I1008 14:50:10.144786 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2_75792fb1-86a8-4681-b318-ddce6b276cac/util/0.log"
Oct 08 14:50:10 crc kubenswrapper[5065]: I1008 14:50:10.203697 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2_75792fb1-86a8-4681-b318-ddce6b276cac/extract/0.log"
Oct 08 14:50:10 crc kubenswrapper[5065]: I1008 14:50:10.208932 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69mlsd2_75792fb1-86a8-4681-b318-ddce6b276cac/pull/0.log"
Oct 08 14:50:10 crc kubenswrapper[5065]: I1008 14:50:10.315254 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m_ca6d2539-bba8-4625-a049-e3fa4403c861/util/0.log"
Oct 08 14:50:10 crc kubenswrapper[5065]: I1008 14:50:10.541582 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m_ca6d2539-bba8-4625-a049-e3fa4403c861/util/0.log"
Oct 08 14:50:10 crc kubenswrapper[5065]: I1008 14:50:10.562404 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m_ca6d2539-bba8-4625-a049-e3fa4403c861/pull/0.log"
Oct 08 14:50:10 crc kubenswrapper[5065]: I1008 14:50:10.581683 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m_ca6d2539-bba8-4625-a049-e3fa4403c861/pull/0.log"
Oct 08 14:50:10 crc kubenswrapper[5065]: I1008 14:50:10.785794 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m_ca6d2539-bba8-4625-a049-e3fa4403c861/extract/0.log"
Oct 08 14:50:10 crc kubenswrapper[5065]: I1008 14:50:10.793749 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m_ca6d2539-bba8-4625-a049-e3fa4403c861/util/0.log"
Oct 08 14:50:10 crc kubenswrapper[5065]: I1008 14:50:10.862281 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2knz9m_ca6d2539-bba8-4625-a049-e3fa4403c861/pull/0.log"
Oct 08 14:50:11 crc kubenswrapper[5065]: I1008 14:50:11.007693 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h6s5m_7db76d2a-3052-40ff-9e53-f6ffac1c7aa0/extract-utilities/0.log"
Oct 08 14:50:11 crc kubenswrapper[5065]: I1008 14:50:11.124075 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h6s5m_7db76d2a-3052-40ff-9e53-f6ffac1c7aa0/extract-content/0.log"
Oct 08 14:50:11 crc kubenswrapper[5065]: I1008 14:50:11.183251 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h6s5m_7db76d2a-3052-40ff-9e53-f6ffac1c7aa0/extract-utilities/0.log"
Oct 08 14:50:11 crc kubenswrapper[5065]: I1008 14:50:11.197863 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h6s5m_7db76d2a-3052-40ff-9e53-f6ffac1c7aa0/extract-content/0.log"
Oct 08 14:50:11 crc kubenswrapper[5065]: I1008 14:50:11.334696 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h6s5m_7db76d2a-3052-40ff-9e53-f6ffac1c7aa0/extract-utilities/0.log"
Oct 08 14:50:11 crc kubenswrapper[5065]: I1008 14:50:11.416517 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h6s5m_7db76d2a-3052-40ff-9e53-f6ffac1c7aa0/extract-content/0.log"
Oct 08 14:50:11 crc kubenswrapper[5065]: I1008 14:50:11.526864 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-twtsk_663bba2d-3cb7-4ead-889f-062b9ffc9a61/extract-utilities/0.log"
Oct 08 14:50:11 crc kubenswrapper[5065]: I1008 14:50:11.777195 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-twtsk_663bba2d-3cb7-4ead-889f-062b9ffc9a61/extract-content/0.log"
Oct 08 14:50:11 crc kubenswrapper[5065]: I1008 14:50:11.783989 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-h6s5m_7db76d2a-3052-40ff-9e53-f6ffac1c7aa0/registry-server/0.log"
Oct 08 14:50:11 crc kubenswrapper[5065]: I1008 14:50:11.788981 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-twtsk_663bba2d-3cb7-4ead-889f-062b9ffc9a61/extract-utilities/0.log"
Oct 08 14:50:11 crc kubenswrapper[5065]: I1008 14:50:11.849622 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-twtsk_663bba2d-3cb7-4ead-889f-062b9ffc9a61/extract-content/0.log"
Oct 08 14:50:12 crc kubenswrapper[5065]: I1008 14:50:12.022945 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-twtsk_663bba2d-3cb7-4ead-889f-062b9ffc9a61/extract-utilities/0.log"
Oct 08 14:50:12 crc kubenswrapper[5065]: I1008 14:50:12.035687 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-twtsk_663bba2d-3cb7-4ead-889f-062b9ffc9a61/extract-content/0.log"
Oct 08 14:50:12 crc kubenswrapper[5065]: I1008 14:50:12.281344 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83/util/0.log"
Oct 08 14:50:12 crc kubenswrapper[5065]: I1008 14:50:12.478432 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83/pull/0.log"
Oct 08 14:50:12 crc kubenswrapper[5065]: I1008 14:50:12.535863 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83/util/0.log"
Oct 08 14:50:12 crc kubenswrapper[5065]: I1008 14:50:12.581599 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83/pull/0.log"
Oct 08 14:50:12 crc kubenswrapper[5065]: I1008 14:50:12.727956 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83/pull/0.log"
Oct 08 14:50:12 crc kubenswrapper[5065]: I1008 14:50:12.777281 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83/util/0.log"
Oct 08 14:50:12 crc kubenswrapper[5065]: I1008 14:50:12.835434 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c6kgmb_329a8c88-c31a-4ec2-9a5b-7f65ecc3ba83/extract/0.log"
Oct 08 14:50:12 crc kubenswrapper[5065]: I1008 14:50:12.841838 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-twtsk_663bba2d-3cb7-4ead-889f-062b9ffc9a61/registry-server/0.log"
Oct 08 14:50:12 crc kubenswrapper[5065]: I1008 14:50:12.961982 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vwmng_3dc7f2a9-f40a-4142-b340-eef8a51a976c/marketplace-operator/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.008625 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lfc7t_1c513069-7194-4048-b4d2-75b33c12a946/extract-utilities/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.170027 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lfc7t_1c513069-7194-4048-b4d2-75b33c12a946/extract-utilities/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.183618 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lfc7t_1c513069-7194-4048-b4d2-75b33c12a946/extract-content/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.183620 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lfc7t_1c513069-7194-4048-b4d2-75b33c12a946/extract-content/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.289584 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lfc7t"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.289640 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lfc7t"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.329041 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lfc7t_1c513069-7194-4048-b4d2-75b33c12a946/registry-server/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.334392 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lfc7t"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.346970 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lfc7t_1c513069-7194-4048-b4d2-75b33c12a946/extract-content/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.356037 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lfc7t_1c513069-7194-4048-b4d2-75b33c12a946/extract-utilities/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.383013 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlxs2_f3c95649-b562-43e6-ba51-25625b9df60f/extract-utilities/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.438964 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lfc7t"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.568544 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfc7t"]
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.570911 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlxs2_f3c95649-b562-43e6-ba51-25625b9df60f/extract-content/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.574834 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlxs2_f3c95649-b562-43e6-ba51-25625b9df60f/extract-utilities/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.596835 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlxs2_f3c95649-b562-43e6-ba51-25625b9df60f/extract-content/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.713362 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlxs2_f3c95649-b562-43e6-ba51-25625b9df60f/extract-utilities/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.761605 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlxs2_f3c95649-b562-43e6-ba51-25625b9df60f/extract-content/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.777515 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hzrfx_996097d7-4c9a-4798-8e9e-b3e26b7dffda/extract-utilities/0.log"
Oct 08 14:50:13 crc kubenswrapper[5065]: I1008 14:50:13.989094 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlxs2_f3c95649-b562-43e6-ba51-25625b9df60f/registry-server/0.log"
Oct 08 14:50:14 crc kubenswrapper[5065]: I1008 14:50:14.022742 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hzrfx_996097d7-4c9a-4798-8e9e-b3e26b7dffda/extract-utilities/0.log"
Oct 08 14:50:14 crc kubenswrapper[5065]: I1008 14:50:14.059029 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hzrfx_996097d7-4c9a-4798-8e9e-b3e26b7dffda/extract-content/0.log"
Oct 08 14:50:14 crc kubenswrapper[5065]: I1008 14:50:14.062847 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hzrfx_996097d7-4c9a-4798-8e9e-b3e26b7dffda/extract-content/0.log"
Oct 08 14:50:14 crc kubenswrapper[5065]: I1008 14:50:14.179353 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hzrfx_996097d7-4c9a-4798-8e9e-b3e26b7dffda/extract-content/0.log"
Oct 08 14:50:14 crc kubenswrapper[5065]: I1008 14:50:14.201248 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hzrfx_996097d7-4c9a-4798-8e9e-b3e26b7dffda/extract-utilities/0.log"
Oct 08 14:50:14 crc kubenswrapper[5065]: I1008 14:50:14.527701 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hzrfx_996097d7-4c9a-4798-8e9e-b3e26b7dffda/registry-server/0.log"
Oct 08 14:50:15 crc kubenswrapper[5065]: I1008 14:50:15.413234 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lfc7t" podUID="1c513069-7194-4048-b4d2-75b33c12a946" containerName="registry-server" containerID="cri-o://09a291a29557e1ccac7539416ec486f92731621fb1b4c55092cc48e77b46a206" gracePeriod=2
Oct 08 14:50:15 crc kubenswrapper[5065]: I1008 14:50:15.865751 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfc7t"
Oct 08 14:50:15 crc kubenswrapper[5065]: I1008 14:50:15.883453 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcmkv\" (UniqueName: \"kubernetes.io/projected/1c513069-7194-4048-b4d2-75b33c12a946-kube-api-access-mcmkv\") pod \"1c513069-7194-4048-b4d2-75b33c12a946\" (UID: \"1c513069-7194-4048-b4d2-75b33c12a946\") "
Oct 08 14:50:15 crc kubenswrapper[5065]: I1008 14:50:15.883808 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c513069-7194-4048-b4d2-75b33c12a946-utilities\") pod \"1c513069-7194-4048-b4d2-75b33c12a946\" (UID: \"1c513069-7194-4048-b4d2-75b33c12a946\") "
Oct 08 14:50:15 crc kubenswrapper[5065]: I1008 14:50:15.883967 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c513069-7194-4048-b4d2-75b33c12a946-catalog-content\") pod \"1c513069-7194-4048-b4d2-75b33c12a946\" (UID: \"1c513069-7194-4048-b4d2-75b33c12a946\") "
Oct 08 14:50:15 crc kubenswrapper[5065]: I1008 14:50:15.884952 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c513069-7194-4048-b4d2-75b33c12a946-utilities" (OuterVolumeSpecName: "utilities") pod "1c513069-7194-4048-b4d2-75b33c12a946" (UID: "1c513069-7194-4048-b4d2-75b33c12a946"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:50:15 crc kubenswrapper[5065]: I1008 14:50:15.888294 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c513069-7194-4048-b4d2-75b33c12a946-kube-api-access-mcmkv" (OuterVolumeSpecName: "kube-api-access-mcmkv") pod "1c513069-7194-4048-b4d2-75b33c12a946" (UID: "1c513069-7194-4048-b4d2-75b33c12a946"). InnerVolumeSpecName "kube-api-access-mcmkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:50:15 crc kubenswrapper[5065]: I1008 14:50:15.912117 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c513069-7194-4048-b4d2-75b33c12a946-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c513069-7194-4048-b4d2-75b33c12a946" (UID: "1c513069-7194-4048-b4d2-75b33c12a946"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:50:15 crc kubenswrapper[5065]: I1008 14:50:15.986563 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcmkv\" (UniqueName: \"kubernetes.io/projected/1c513069-7194-4048-b4d2-75b33c12a946-kube-api-access-mcmkv\") on node \"crc\" DevicePath \"\"" Oct 08 14:50:15 crc kubenswrapper[5065]: I1008 14:50:15.986610 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c513069-7194-4048-b4d2-75b33c12a946-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:50:15 crc kubenswrapper[5065]: I1008 14:50:15.986626 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c513069-7194-4048-b4d2-75b33c12a946-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.421821 5065 generic.go:334] "Generic (PLEG): container finished" podID="1c513069-7194-4048-b4d2-75b33c12a946" containerID="09a291a29557e1ccac7539416ec486f92731621fb1b4c55092cc48e77b46a206" exitCode=0 Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.421870 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfc7t" Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.422768 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfc7t" event={"ID":"1c513069-7194-4048-b4d2-75b33c12a946","Type":"ContainerDied","Data":"09a291a29557e1ccac7539416ec486f92731621fb1b4c55092cc48e77b46a206"} Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.422901 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfc7t" event={"ID":"1c513069-7194-4048-b4d2-75b33c12a946","Type":"ContainerDied","Data":"db3631ca74fe59d2d1095f96a0bf3872d7f78d57daea20a475ff4590b4cd574b"} Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.422997 5065 scope.go:117] "RemoveContainer" containerID="09a291a29557e1ccac7539416ec486f92731621fb1b4c55092cc48e77b46a206" Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.443967 5065 scope.go:117] "RemoveContainer" containerID="9cdbeca851c2944edc208f64d015f5db2088eb9e29d4a2d28572593544194e60" Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.466119 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfc7t"] Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.472929 5065 scope.go:117] "RemoveContainer" containerID="f88b50f313b1218d9ec476e1e50774773e8732ea3db53e24e9c352c2cb78fa7a" Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.473693 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfc7t"] Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.520725 5065 scope.go:117] "RemoveContainer" containerID="09a291a29557e1ccac7539416ec486f92731621fb1b4c55092cc48e77b46a206" Oct 08 14:50:16 crc kubenswrapper[5065]: E1008 14:50:16.521275 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09a291a29557e1ccac7539416ec486f92731621fb1b4c55092cc48e77b46a206\": container with ID starting with 09a291a29557e1ccac7539416ec486f92731621fb1b4c55092cc48e77b46a206 not found: ID does not exist" containerID="09a291a29557e1ccac7539416ec486f92731621fb1b4c55092cc48e77b46a206" Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.521305 5065 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09a291a29557e1ccac7539416ec486f92731621fb1b4c55092cc48e77b46a206"} err="failed to get container status \"09a291a29557e1ccac7539416ec486f92731621fb1b4c55092cc48e77b46a206\": rpc error: code = NotFound desc = could not find container \"09a291a29557e1ccac7539416ec486f92731621fb1b4c55092cc48e77b46a206\": container with ID starting with 09a291a29557e1ccac7539416ec486f92731621fb1b4c55092cc48e77b46a206 not found: ID does not exist" Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.521326 5065 scope.go:117] "RemoveContainer" containerID="9cdbeca851c2944edc208f64d015f5db2088eb9e29d4a2d28572593544194e60" Oct 08 14:50:16 crc kubenswrapper[5065]: E1008 14:50:16.521764 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cdbeca851c2944edc208f64d015f5db2088eb9e29d4a2d28572593544194e60\": container with ID starting with 9cdbeca851c2944edc208f64d015f5db2088eb9e29d4a2d28572593544194e60 not found: ID does not exist" containerID="9cdbeca851c2944edc208f64d015f5db2088eb9e29d4a2d28572593544194e60" Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.521808 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cdbeca851c2944edc208f64d015f5db2088eb9e29d4a2d28572593544194e60"} err="failed to get container status \"9cdbeca851c2944edc208f64d015f5db2088eb9e29d4a2d28572593544194e60\": rpc error: code = NotFound desc = could not find container \"9cdbeca851c2944edc208f64d015f5db2088eb9e29d4a2d28572593544194e60\": container with ID starting with 9cdbeca851c2944edc208f64d015f5db2088eb9e29d4a2d28572593544194e60 not found: ID does not exist" Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.521840 5065 scope.go:117] "RemoveContainer" containerID="f88b50f313b1218d9ec476e1e50774773e8732ea3db53e24e9c352c2cb78fa7a" Oct 08 14:50:16 crc kubenswrapper[5065]: E1008 14:50:16.522205 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f88b50f313b1218d9ec476e1e50774773e8732ea3db53e24e9c352c2cb78fa7a\": container with ID starting with f88b50f313b1218d9ec476e1e50774773e8732ea3db53e24e9c352c2cb78fa7a not found: ID does not exist" containerID="f88b50f313b1218d9ec476e1e50774773e8732ea3db53e24e9c352c2cb78fa7a" Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.522251 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f88b50f313b1218d9ec476e1e50774773e8732ea3db53e24e9c352c2cb78fa7a"} err="failed to get container status \"f88b50f313b1218d9ec476e1e50774773e8732ea3db53e24e9c352c2cb78fa7a\": rpc error: code = NotFound desc = could not find container \"f88b50f313b1218d9ec476e1e50774773e8732ea3db53e24e9c352c2cb78fa7a\": container with ID starting with f88b50f313b1218d9ec476e1e50774773e8732ea3db53e24e9c352c2cb78fa7a not found: ID does not exist" Oct 08 14:50:16 crc kubenswrapper[5065]: I1008 14:50:16.892500 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c513069-7194-4048-b4d2-75b33c12a946" path="/var/lib/kubelet/pods/1c513069-7194-4048-b4d2-75b33c12a946/volumes" Oct 08 14:50:52 crc kubenswrapper[5065]: I1008 14:50:52.966389 5065 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-thv4p"] Oct 08 14:50:52 crc kubenswrapper[5065]: E1008 14:50:52.967242 5065 cpu_manager.go:410] "RemoveStaleState: removing container" 
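[Annotation] The NotFound errors above are harmless: container cleanup is effectively idempotent. When two paths race to delete the same container (here, pod removal and PLEG-driven garbage collection), the loser gets NotFound from CRI-O, logs "DeleteContainer returned error", and moves on. An illustrative model of that behavior; the names are invented, not kubelet's API.

    package main

    import (
    	"errors"
    	"fmt"
    )

    var errNotFound = errors.New("rpc error: code = NotFound desc = could not find container")

    // removeContainer pretends to call the CRI runtime's RemoveContainer.
    func removeContainer(id string, present map[string]bool) error {
    	if !present[id] {
    		return errNotFound
    	}
    	delete(present, id)
    	return nil
    }

    func main() {
    	running := map[string]bool{"09a291a2": true}
    	for _, attempt := range []int{1, 2} { // attempt 2 races with attempt 1
    		err := removeContainer("09a291a2", running)
    		switch {
    		case err == nil:
    			fmt.Println("attempt", attempt, "removed container")
    		case errors.Is(err, errNotFound):
    			// Already gone: log it and treat the delete as done,
    			// which is what the kubelet does in the entries above.
    			fmt.Println("attempt", attempt, "DeleteContainer returned error:", err)
    		default:
    			fmt.Println("attempt", attempt, "will retry later:", err)
    		}
    	}
    }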
podUID="1c513069-7194-4048-b4d2-75b33c12a946" containerName="registry-server" Oct 08 14:50:52 crc kubenswrapper[5065]: I1008 14:50:52.967257 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c513069-7194-4048-b4d2-75b33c12a946" containerName="registry-server" Oct 08 14:50:52 crc kubenswrapper[5065]: E1008 14:50:52.967276 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c513069-7194-4048-b4d2-75b33c12a946" containerName="extract-utilities" Oct 08 14:50:52 crc kubenswrapper[5065]: I1008 14:50:52.967286 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c513069-7194-4048-b4d2-75b33c12a946" containerName="extract-utilities" Oct 08 14:50:52 crc kubenswrapper[5065]: E1008 14:50:52.967314 5065 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c513069-7194-4048-b4d2-75b33c12a946" containerName="extract-content" Oct 08 14:50:52 crc kubenswrapper[5065]: I1008 14:50:52.967323 5065 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c513069-7194-4048-b4d2-75b33c12a946" containerName="extract-content" Oct 08 14:50:52 crc kubenswrapper[5065]: I1008 14:50:52.967568 5065 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c513069-7194-4048-b4d2-75b33c12a946" containerName="registry-server" Oct 08 14:50:52 crc kubenswrapper[5065]: I1008 14:50:52.970092 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:50:52 crc kubenswrapper[5065]: I1008 14:50:52.984464 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thv4p"] Oct 08 14:50:53 crc kubenswrapper[5065]: I1008 14:50:53.097318 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a13909-0942-45ca-b554-bfabcf06614f-utilities\") pod \"redhat-operators-thv4p\" (UID: \"c8a13909-0942-45ca-b554-bfabcf06614f\") " pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:50:53 crc kubenswrapper[5065]: I1008 14:50:53.097484 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlzsf\" (UniqueName: \"kubernetes.io/projected/c8a13909-0942-45ca-b554-bfabcf06614f-kube-api-access-jlzsf\") pod \"redhat-operators-thv4p\" (UID: \"c8a13909-0942-45ca-b554-bfabcf06614f\") " pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:50:53 crc kubenswrapper[5065]: I1008 14:50:53.097534 5065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a13909-0942-45ca-b554-bfabcf06614f-catalog-content\") pod \"redhat-operators-thv4p\" (UID: \"c8a13909-0942-45ca-b554-bfabcf06614f\") " pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:50:53 crc kubenswrapper[5065]: I1008 14:50:53.199075 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlzsf\" (UniqueName: \"kubernetes.io/projected/c8a13909-0942-45ca-b554-bfabcf06614f-kube-api-access-jlzsf\") pod \"redhat-operators-thv4p\" (UID: \"c8a13909-0942-45ca-b554-bfabcf06614f\") " pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:50:53 crc kubenswrapper[5065]: I1008 14:50:53.199146 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a13909-0942-45ca-b554-bfabcf06614f-catalog-content\") pod \"redhat-operators-thv4p\" 
(UID: \"c8a13909-0942-45ca-b554-bfabcf06614f\") " pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:50:53 crc kubenswrapper[5065]: I1008 14:50:53.199210 5065 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a13909-0942-45ca-b554-bfabcf06614f-utilities\") pod \"redhat-operators-thv4p\" (UID: \"c8a13909-0942-45ca-b554-bfabcf06614f\") " pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:50:53 crc kubenswrapper[5065]: I1008 14:50:53.199650 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a13909-0942-45ca-b554-bfabcf06614f-utilities\") pod \"redhat-operators-thv4p\" (UID: \"c8a13909-0942-45ca-b554-bfabcf06614f\") " pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:50:53 crc kubenswrapper[5065]: I1008 14:50:53.199785 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a13909-0942-45ca-b554-bfabcf06614f-catalog-content\") pod \"redhat-operators-thv4p\" (UID: \"c8a13909-0942-45ca-b554-bfabcf06614f\") " pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:50:53 crc kubenswrapper[5065]: I1008 14:50:53.218831 5065 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlzsf\" (UniqueName: \"kubernetes.io/projected/c8a13909-0942-45ca-b554-bfabcf06614f-kube-api-access-jlzsf\") pod \"redhat-operators-thv4p\" (UID: \"c8a13909-0942-45ca-b554-bfabcf06614f\") " pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:50:53 crc kubenswrapper[5065]: I1008 14:50:53.299829 5065 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:50:53 crc kubenswrapper[5065]: I1008 14:50:53.731904 5065 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thv4p"] Oct 08 14:50:54 crc kubenswrapper[5065]: I1008 14:50:54.694192 5065 generic.go:334] "Generic (PLEG): container finished" podID="c8a13909-0942-45ca-b554-bfabcf06614f" containerID="2de1b90e234699adabd4a0b2ac6049119da258f70d6d9a6c8e430498f0a4450d" exitCode=0 Oct 08 14:50:54 crc kubenswrapper[5065]: I1008 14:50:54.694242 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv4p" event={"ID":"c8a13909-0942-45ca-b554-bfabcf06614f","Type":"ContainerDied","Data":"2de1b90e234699adabd4a0b2ac6049119da258f70d6d9a6c8e430498f0a4450d"} Oct 08 14:50:54 crc kubenswrapper[5065]: I1008 14:50:54.695623 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv4p" event={"ID":"c8a13909-0942-45ca-b554-bfabcf06614f","Type":"ContainerStarted","Data":"e20b5c52915e7732584a87af3647b88ac86eed5e34d21ba7d6cac41671a999bd"} Oct 08 14:50:56 crc kubenswrapper[5065]: I1008 14:50:56.717640 5065 generic.go:334] "Generic (PLEG): container finished" podID="c8a13909-0942-45ca-b554-bfabcf06614f" containerID="d7c0b0636b559359d62b274e21d07cdb00acb3e33cc57f53b2e8ed99ee863206" exitCode=0 Oct 08 14:50:56 crc kubenswrapper[5065]: I1008 14:50:56.717742 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv4p" event={"ID":"c8a13909-0942-45ca-b554-bfabcf06614f","Type":"ContainerDied","Data":"d7c0b0636b559359d62b274e21d07cdb00acb3e33cc57f53b2e8ed99ee863206"} Oct 08 14:50:57 crc kubenswrapper[5065]: I1008 14:50:57.728645 5065 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv4p" event={"ID":"c8a13909-0942-45ca-b554-bfabcf06614f","Type":"ContainerStarted","Data":"94d88c23acf0efc7b17bc5cd5be928e4350e06233cfe74f5834479ab8329f4ee"} Oct 08 14:50:57 crc kubenswrapper[5065]: I1008 14:50:57.750126 5065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-thv4p" podStartSLOduration=3.342597048 podStartE2EDuration="5.750107363s" podCreationTimestamp="2025-10-08 14:50:52 +0000 UTC" firstStartedPulling="2025-10-08 14:50:54.699440765 +0000 UTC m=+5556.476822522" lastFinishedPulling="2025-10-08 14:50:57.10695107 +0000 UTC m=+5558.884332837" observedRunningTime="2025-10-08 14:50:57.742897546 +0000 UTC m=+5559.520279313" watchObservedRunningTime="2025-10-08 14:50:57.750107363 +0000 UTC m=+5559.527489120" Oct 08 14:51:03 crc kubenswrapper[5065]: I1008 14:51:03.300966 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:51:03 crc kubenswrapper[5065]: I1008 14:51:03.301638 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:51:03 crc kubenswrapper[5065]: I1008 14:51:03.400023 5065 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:51:03 crc kubenswrapper[5065]: I1008 14:51:03.821702 5065 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-thv4p" Oct 08 14:51:03 crc kubenswrapper[5065]: I1008 14:51:03.882451 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-thv4p"] Oct 08 14:51:05 crc kubenswrapper[5065]: I1008 14:51:05.791123 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-thv4p" podUID="c8a13909-0942-45ca-b554-bfabcf06614f" containerName="registry-server" containerID="cri-o://94d88c23acf0efc7b17bc5cd5be928e4350e06233cfe74f5834479ab8329f4ee" gracePeriod=2 Oct 08 14:51:06 crc kubenswrapper[5065]: I1008 14:51:06.806463 5065 generic.go:334] "Generic (PLEG): container finished" podID="c8a13909-0942-45ca-b554-bfabcf06614f" containerID="94d88c23acf0efc7b17bc5cd5be928e4350e06233cfe74f5834479ab8329f4ee" exitCode=0 Oct 08 14:51:06 crc kubenswrapper[5065]: I1008 14:51:06.806701 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv4p" event={"ID":"c8a13909-0942-45ca-b554-bfabcf06614f","Type":"ContainerDied","Data":"94d88c23acf0efc7b17bc5cd5be928e4350e06233cfe74f5834479ab8329f4ee"} Oct 08 14:51:06 crc kubenswrapper[5065]: I1008 14:51:06.806752 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv4p" event={"ID":"c8a13909-0942-45ca-b554-bfabcf06614f","Type":"ContainerDied","Data":"e20b5c52915e7732584a87af3647b88ac86eed5e34d21ba7d6cac41671a999bd"} Oct 08 14:51:06 crc kubenswrapper[5065]: I1008 14:51:06.806763 5065 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e20b5c52915e7732584a87af3647b88ac86eed5e34d21ba7d6cac41671a999bd" Oct 08 14:51:06 crc kubenswrapper[5065]: I1008 14:51:06.852744 5065 util.go:48] "No ready sandbox for pod can be found. 
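[Annotation] The probe sequence for redhat-operators-thv4p above mirrors the earlier redhat-marketplace-lfc7t one: readiness is reported with an empty status while the startup probe is still "unhealthy", and only after startup flips to "started" does readiness become "ready". A simplified illustration of that gating; the types are invented, not the kubelet's prober worker.

    package main

    import "fmt"

    type podProbes struct {
    	startupPassed  bool
    	readinessProbe func() bool
    }

    // readinessStatus mirrors the log: "" until the startup probe passes,
    // then the readiness probe's own result counts.
    func (p *podProbes) readinessStatus() string {
    	if !p.startupPassed {
    		return "" // startup gate closed; readiness not evaluated yet
    	}
    	if p.readinessProbe() {
    		return "ready"
    	}
    	return "unready"
    }

    func main() {
    	p := &podProbes{readinessProbe: func() bool { return true }}
    	fmt.Printf("readiness=%q startup=unhealthy\n", p.readinessStatus())
    	p.startupPassed = true // startup probe succeeded ("started")
    	fmt.Printf("readiness=%q startup=started\n", p.readinessStatus())
    }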
Oct 08 14:51:06 crc kubenswrapper[5065]: I1008 14:51:06.852744 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thv4p"
Oct 08 14:51:06 crc kubenswrapper[5065]: I1008 14:51:06.927937 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlzsf\" (UniqueName: \"kubernetes.io/projected/c8a13909-0942-45ca-b554-bfabcf06614f-kube-api-access-jlzsf\") pod \"c8a13909-0942-45ca-b554-bfabcf06614f\" (UID: \"c8a13909-0942-45ca-b554-bfabcf06614f\") "
Oct 08 14:51:06 crc kubenswrapper[5065]: I1008 14:51:06.928104 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a13909-0942-45ca-b554-bfabcf06614f-utilities\") pod \"c8a13909-0942-45ca-b554-bfabcf06614f\" (UID: \"c8a13909-0942-45ca-b554-bfabcf06614f\") "
Oct 08 14:51:06 crc kubenswrapper[5065]: I1008 14:51:06.928223 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a13909-0942-45ca-b554-bfabcf06614f-catalog-content\") pod \"c8a13909-0942-45ca-b554-bfabcf06614f\" (UID: \"c8a13909-0942-45ca-b554-bfabcf06614f\") "
Oct 08 14:51:06 crc kubenswrapper[5065]: I1008 14:51:06.930323 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8a13909-0942-45ca-b554-bfabcf06614f-utilities" (OuterVolumeSpecName: "utilities") pod "c8a13909-0942-45ca-b554-bfabcf06614f" (UID: "c8a13909-0942-45ca-b554-bfabcf06614f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:51:06 crc kubenswrapper[5065]: I1008 14:51:06.934689 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a13909-0942-45ca-b554-bfabcf06614f-kube-api-access-jlzsf" (OuterVolumeSpecName: "kube-api-access-jlzsf") pod "c8a13909-0942-45ca-b554-bfabcf06614f" (UID: "c8a13909-0942-45ca-b554-bfabcf06614f"). InnerVolumeSpecName "kube-api-access-jlzsf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:51:07 crc kubenswrapper[5065]: I1008 14:51:07.018848 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8a13909-0942-45ca-b554-bfabcf06614f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8a13909-0942-45ca-b554-bfabcf06614f" (UID: "c8a13909-0942-45ca-b554-bfabcf06614f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:51:07 crc kubenswrapper[5065]: I1008 14:51:07.030256 5065 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a13909-0942-45ca-b554-bfabcf06614f-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 14:51:07 crc kubenswrapper[5065]: I1008 14:51:07.030298 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlzsf\" (UniqueName: \"kubernetes.io/projected/c8a13909-0942-45ca-b554-bfabcf06614f-kube-api-access-jlzsf\") on node \"crc\" DevicePath \"\""
Oct 08 14:51:07 crc kubenswrapper[5065]: I1008 14:51:07.030315 5065 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a13909-0942-45ca-b554-bfabcf06614f-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 14:51:07 crc kubenswrapper[5065]: I1008 14:51:07.815209 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thv4p"
Oct 08 14:51:07 crc kubenswrapper[5065]: I1008 14:51:07.853357 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-thv4p"]
Oct 08 14:51:07 crc kubenswrapper[5065]: I1008 14:51:07.856402 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-thv4p"]
Oct 08 14:51:08 crc kubenswrapper[5065]: I1008 14:51:08.882278 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8a13909-0942-45ca-b554-bfabcf06614f" path="/var/lib/kubelet/pods/c8a13909-0942-45ca-b554-bfabcf06614f/volumes"
Oct 08 14:51:33 crc kubenswrapper[5065]: I1008 14:51:33.043270 5065 generic.go:334] "Generic (PLEG): container finished" podID="8fa21b44-d5b3-4149-a902-5e46b10f3318" containerID="7d49d41183e48a6dc6a7e294c7bd5fb85dd2f8a48b19d381ad9bc1356aa07faa" exitCode=0
Oct 08 14:51:33 crc kubenswrapper[5065]: I1008 14:51:33.043370 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-m8x8w/must-gather-kxttl" event={"ID":"8fa21b44-d5b3-4149-a902-5e46b10f3318","Type":"ContainerDied","Data":"7d49d41183e48a6dc6a7e294c7bd5fb85dd2f8a48b19d381ad9bc1356aa07faa"}
Oct 08 14:51:33 crc kubenswrapper[5065]: I1008 14:51:33.046080 5065 scope.go:117] "RemoveContainer" containerID="7d49d41183e48a6dc6a7e294c7bd5fb85dd2f8a48b19d381ad9bc1356aa07faa"
Oct 08 14:51:33 crc kubenswrapper[5065]: I1008 14:51:33.880047 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-m8x8w_must-gather-kxttl_8fa21b44-d5b3-4149-a902-5e46b10f3318/gather/0.log"
Oct 08 14:51:41 crc kubenswrapper[5065]: I1008 14:51:41.238678 5065 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-m8x8w/must-gather-kxttl"]
Oct 08 14:51:41 crc kubenswrapper[5065]: I1008 14:51:41.239317 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-m8x8w/must-gather-kxttl" podUID="8fa21b44-d5b3-4149-a902-5e46b10f3318" containerName="copy" containerID="cri-o://4a027ae71561063534a72e7228c5c24778aeccd9f0f525140e8d37f8e7606202" gracePeriod=2
Oct 08 14:51:41 crc kubenswrapper[5065]: I1008 14:51:41.246878 5065 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-m8x8w/must-gather-kxttl"]
Oct 08 14:51:41 crc kubenswrapper[5065]: I1008 14:51:41.694324 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-m8x8w_must-gather-kxttl_8fa21b44-d5b3-4149-a902-5e46b10f3318/copy/0.log"
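[Annotation] "Killing container with a grace period" with gracePeriod=2 means: deliver SIGTERM, give the process up to two seconds to exit, then escalate to SIGKILL. The copy container exits with code 143 just below, i.e. 128+15, death by SIGTERM inside the grace window. A toy model of the escalation, with channels standing in for signals (the real work happens inside CRI-O, not the kubelet):

    package main

    import (
    	"fmt"
    	"time"
    )

    func killWithGrace(stop chan struct{}, exited <-chan int, grace time.Duration) {
    	close(stop) // stand-in for sending SIGTERM
    	select {
    	case code := <-exited:
    		fmt.Println("container exited with code", code) // 143 = 128 + SIGTERM
    	case <-time.After(grace):
    		fmt.Println("grace period elapsed; sending SIGKILL")
    	}
    }

    func main() {
    	stop := make(chan struct{})
    	exited := make(chan int, 1)
    	go func() { <-stop; exited <- 143 }() // well-behaved container: exits on SIGTERM
    	killWithGrace(stop, exited, 2*time.Second)
    }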
Oct 08 14:51:41 crc kubenswrapper[5065]: I1008 14:51:41.695074 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-m8x8w/must-gather-kxttl"
Oct 08 14:51:41 crc kubenswrapper[5065]: I1008 14:51:41.810777 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8fa21b44-d5b3-4149-a902-5e46b10f3318-must-gather-output\") pod \"8fa21b44-d5b3-4149-a902-5e46b10f3318\" (UID: \"8fa21b44-d5b3-4149-a902-5e46b10f3318\") "
Oct 08 14:51:41 crc kubenswrapper[5065]: I1008 14:51:41.810914 5065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqqgn\" (UniqueName: \"kubernetes.io/projected/8fa21b44-d5b3-4149-a902-5e46b10f3318-kube-api-access-vqqgn\") pod \"8fa21b44-d5b3-4149-a902-5e46b10f3318\" (UID: \"8fa21b44-d5b3-4149-a902-5e46b10f3318\") "
Oct 08 14:51:41 crc kubenswrapper[5065]: I1008 14:51:41.821506 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa21b44-d5b3-4149-a902-5e46b10f3318-kube-api-access-vqqgn" (OuterVolumeSpecName: "kube-api-access-vqqgn") pod "8fa21b44-d5b3-4149-a902-5e46b10f3318" (UID: "8fa21b44-d5b3-4149-a902-5e46b10f3318"). InnerVolumeSpecName "kube-api-access-vqqgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:51:41 crc kubenswrapper[5065]: I1008 14:51:41.885561 5065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fa21b44-d5b3-4149-a902-5e46b10f3318-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8fa21b44-d5b3-4149-a902-5e46b10f3318" (UID: "8fa21b44-d5b3-4149-a902-5e46b10f3318"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:51:41 crc kubenswrapper[5065]: I1008 14:51:41.913053 5065 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8fa21b44-d5b3-4149-a902-5e46b10f3318-must-gather-output\") on node \"crc\" DevicePath \"\""
Oct 08 14:51:41 crc kubenswrapper[5065]: I1008 14:51:41.913093 5065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqqgn\" (UniqueName: \"kubernetes.io/projected/8fa21b44-d5b3-4149-a902-5e46b10f3318-kube-api-access-vqqgn\") on node \"crc\" DevicePath \"\""
Oct 08 14:51:42 crc kubenswrapper[5065]: I1008 14:51:42.126173 5065 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-m8x8w_must-gather-kxttl_8fa21b44-d5b3-4149-a902-5e46b10f3318/copy/0.log"
Oct 08 14:51:42 crc kubenswrapper[5065]: I1008 14:51:42.126659 5065 generic.go:334] "Generic (PLEG): container finished" podID="8fa21b44-d5b3-4149-a902-5e46b10f3318" containerID="4a027ae71561063534a72e7228c5c24778aeccd9f0f525140e8d37f8e7606202" exitCode=143
Oct 08 14:51:42 crc kubenswrapper[5065]: I1008 14:51:42.126705 5065 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-m8x8w/must-gather-kxttl"
Oct 08 14:51:42 crc kubenswrapper[5065]: I1008 14:51:42.126710 5065 scope.go:117] "RemoveContainer" containerID="4a027ae71561063534a72e7228c5c24778aeccd9f0f525140e8d37f8e7606202"
Oct 08 14:51:42 crc kubenswrapper[5065]: I1008 14:51:42.152315 5065 scope.go:117] "RemoveContainer" containerID="7d49d41183e48a6dc6a7e294c7bd5fb85dd2f8a48b19d381ad9bc1356aa07faa"
Oct 08 14:51:42 crc kubenswrapper[5065]: I1008 14:51:42.211083 5065 scope.go:117] "RemoveContainer" containerID="4a027ae71561063534a72e7228c5c24778aeccd9f0f525140e8d37f8e7606202"
Oct 08 14:51:42 crc kubenswrapper[5065]: E1008 14:51:42.214286 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a027ae71561063534a72e7228c5c24778aeccd9f0f525140e8d37f8e7606202\": container with ID starting with 4a027ae71561063534a72e7228c5c24778aeccd9f0f525140e8d37f8e7606202 not found: ID does not exist" containerID="4a027ae71561063534a72e7228c5c24778aeccd9f0f525140e8d37f8e7606202"
Oct 08 14:51:42 crc kubenswrapper[5065]: I1008 14:51:42.214341 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a027ae71561063534a72e7228c5c24778aeccd9f0f525140e8d37f8e7606202"} err="failed to get container status \"4a027ae71561063534a72e7228c5c24778aeccd9f0f525140e8d37f8e7606202\": rpc error: code = NotFound desc = could not find container \"4a027ae71561063534a72e7228c5c24778aeccd9f0f525140e8d37f8e7606202\": container with ID starting with 4a027ae71561063534a72e7228c5c24778aeccd9f0f525140e8d37f8e7606202 not found: ID does not exist"
Oct 08 14:51:42 crc kubenswrapper[5065]: I1008 14:51:42.214376 5065 scope.go:117] "RemoveContainer" containerID="7d49d41183e48a6dc6a7e294c7bd5fb85dd2f8a48b19d381ad9bc1356aa07faa"
Oct 08 14:51:42 crc kubenswrapper[5065]: E1008 14:51:42.214969 5065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d49d41183e48a6dc6a7e294c7bd5fb85dd2f8a48b19d381ad9bc1356aa07faa\": container with ID starting with 7d49d41183e48a6dc6a7e294c7bd5fb85dd2f8a48b19d381ad9bc1356aa07faa not found: ID does not exist" containerID="7d49d41183e48a6dc6a7e294c7bd5fb85dd2f8a48b19d381ad9bc1356aa07faa"
Oct 08 14:51:42 crc kubenswrapper[5065]: I1008 14:51:42.215026 5065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d49d41183e48a6dc6a7e294c7bd5fb85dd2f8a48b19d381ad9bc1356aa07faa"} err="failed to get container status \"7d49d41183e48a6dc6a7e294c7bd5fb85dd2f8a48b19d381ad9bc1356aa07faa\": rpc error: code = NotFound desc = could not find container \"7d49d41183e48a6dc6a7e294c7bd5fb85dd2f8a48b19d381ad9bc1356aa07faa\": container with ID starting with 7d49d41183e48a6dc6a7e294c7bd5fb85dd2f8a48b19d381ad9bc1356aa07faa not found: ID does not exist"
Oct 08 14:51:42 crc kubenswrapper[5065]: I1008 14:51:42.881212 5065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa21b44-d5b3-4149-a902-5e46b10f3318" path="/var/lib/kubelet/pods/8fa21b44-d5b3-4149-a902-5e46b10f3318/volumes"
Oct 08 14:51:54 crc kubenswrapper[5065]: I1008 14:51:54.375911 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 14:51:54 crc kubenswrapper[5065]: I1008 14:51:54.376570 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 14:52:24 crc kubenswrapper[5065]: I1008 14:52:24.375910 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 14:52:24 crc kubenswrapper[5065]: I1008 14:52:24.376437 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 14:52:30 crc kubenswrapper[5065]: I1008 14:52:30.256017 5065 scope.go:117] "RemoveContainer" containerID="2fb5cc662f35430723733128b4b88ce80dac8adfd2af77d2ae37d4a42c49734a"
Oct 08 14:52:30 crc kubenswrapper[5065]: I1008 14:52:30.294808 5065 scope.go:117] "RemoveContainer" containerID="f67cebe2ee0e925b1883ac7e3f9363e64856c54f01a9d2ac67c3720768f058d1"
Oct 08 14:52:54 crc kubenswrapper[5065]: I1008 14:52:54.375464 5065 patch_prober.go:28] interesting pod/machine-config-daemon-f2pbj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 14:52:54 crc kubenswrapper[5065]: I1008 14:52:54.376081 5065 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 14:52:54 crc kubenswrapper[5065]: I1008 14:52:54.376134 5065 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj"
Oct 08 14:52:54 crc kubenswrapper[5065]: I1008 14:52:54.377045 5065 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab447fd9e35dc2a113badc8451454e8b87660b54c7e6e35d13bf8c60219cdeec"} pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 14:52:54 crc kubenswrapper[5065]: I1008 14:52:54.377105 5065 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerName="machine-config-daemon" containerID="cri-o://ab447fd9e35dc2a113badc8451454e8b87660b54c7e6e35d13bf8c60219cdeec" gracePeriod=600
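[Annotation] The machine-config-daemon liveness probe above fails with "connection refused" at 14:51:54, 14:52:24 and 14:52:54, roughly 30 s apart, and the restart decision lands on the third failure, which is consistent with (though not proof of) a failureThreshold of 3. A self-contained sketch of such a probe loop; the threshold and the 1 s demo period are assumptions, only the URL comes from the log:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func probeOnce(url string) bool {
    	client := http.Client{Timeout: time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		// e.g. dial tcp 127.0.0.1:8798: connect: connection refused
    		fmt.Println("Probe failed:", err)
    		return false
    	}
    	resp.Body.Close()
    	return resp.StatusCode < 400
    }

    func main() {
    	const failureThreshold = 3 // assumed; matches the three failures seen here
    	failures := 0
    	for failures < failureThreshold {
    		if probeOnce("http://127.0.0.1:8798/health") {
    			failures = 0 // any success resets the consecutive-failure count
    		} else {
    			failures++
    		}
    		time.Sleep(time.Second) // the real period in this log looks like 30s
    	}
    	fmt.Println("liveness unhealthy; container will be restarted")
    }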
Oct 08 14:52:54 crc kubenswrapper[5065]: E1008 14:52:54.503787 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 14:52:54 crc kubenswrapper[5065]: I1008 14:52:54.784889 5065 generic.go:334] "Generic (PLEG): container finished" podID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f" containerID="ab447fd9e35dc2a113badc8451454e8b87660b54c7e6e35d13bf8c60219cdeec" exitCode=0
Oct 08 14:52:54 crc kubenswrapper[5065]: I1008 14:52:54.784933 5065 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" event={"ID":"0ee6fc83-d6a5-4808-bea3-6fa4978bad1f","Type":"ContainerDied","Data":"ab447fd9e35dc2a113badc8451454e8b87660b54c7e6e35d13bf8c60219cdeec"}
Oct 08 14:52:54 crc kubenswrapper[5065]: I1008 14:52:54.785306 5065 scope.go:117] "RemoveContainer" containerID="6a2d5920533fb1103902f95c6619956e7541ada7a9d91b36d2c215022ce77856"
Oct 08 14:52:54 crc kubenswrapper[5065]: I1008 14:52:54.786132 5065 scope.go:117] "RemoveContainer" containerID="ab447fd9e35dc2a113badc8451454e8b87660b54c7e6e35d13bf8c60219cdeec"
Oct 08 14:52:54 crc kubenswrapper[5065]: E1008 14:52:54.786661 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 14:53:08 crc kubenswrapper[5065]: I1008 14:53:08.876991 5065 scope.go:117] "RemoveContainer" containerID="ab447fd9e35dc2a113badc8451454e8b87660b54c7e6e35d13bf8c60219cdeec"
Oct 08 14:53:08 crc kubenswrapper[5065]: E1008 14:53:08.877692 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
Oct 08 14:53:20 crc kubenswrapper[5065]: I1008 14:53:20.874545 5065 scope.go:117] "RemoveContainer" containerID="ab447fd9e35dc2a113badc8451454e8b87660b54c7e6e35d13bf8c60219cdeec"
Oct 08 14:53:20 crc kubenswrapper[5065]: E1008 14:53:20.875337 5065 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f2pbj_openshift-machine-config-operator(0ee6fc83-d6a5-4808-bea3-6fa4978bad1f)\"" pod="openshift-machine-config-operator/machine-config-daemon-f2pbj" podUID="0ee6fc83-d6a5-4808-bea3-6fa4978bad1f"
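[Annotation] "back-off 5m0s restarting failed container" reports the current CrashLoopBackOff delay, here already at its ceiling. The repeated "Error syncing pod, skipping" records at 14:52:54, 14:53:08 and 14:53:20 are sync-loop retries bouncing off that back-off window, not the back-off schedule itself. The doubling-with-cap arithmetic sketched below uses the upstream kubelet defaults (10s initial delay, 5m ceiling) as assumptions this log is consistent with but does not itself prove:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const (
    		initial = 10 * time.Second // assumed kubelet default
    		ceiling = 5 * time.Minute  // matches the "back-off 5m0s" in the log
    	)
    	delay := initial
    	for i := 1; i <= 6; i++ {
    		fmt.Printf("restart %d: wait %v\n", i, delay)
    		// Double the delay after each failed restart, capped at the ceiling;
    		// once capped, every retry logs "back-off 5m0s" as seen above.
    		if delay *= 2; delay > ceiling {
    			delay = ceiling
    		}
    	}
    	// Output: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s
    }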